Software Modeling & Analysis - Fall 2016

Project #2 - Test Harness

Version 1.3
Due Date: Wednesday, October 5th

Purpose:

One focus area for this course is understanding how to structure and implement big software systems. By big we mean systems that may consist of hundreds or even thousands of modules and perhaps several million lines of code.

In order to successfully implement big systems we need to partition code into relatively small parts and thoroughly test each of the parts before inserting them into the software baseline. As new parts are added to the baseline, and as we make changes to fix latent errors or performance problems, we will re-run the test sequence for those parts and, perhaps, for the entire baseline. Because there are so many packages, the only way to make this intensive testing practical is to automate the process. How we do that is the topic of the projects for this year.

In this project we will be creating a Test Harness - an automated test tool that runs a specified set of tests on multiple packages. Each test execution runs a test driver on a small set of packages, recording pass status and perhaps logging execution details. Test requests are submitted to the Test Harness via a request message naming one or more test driver executions.
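
The assignment does not prescribe an exact schema for the request message. One possible shape for a Test Request file, with element names chosen purely for illustration, is sketched below; each test element names a test driver DLL and the code DLLs it will test.

    <testRequest>
      <author>Jane Developer</author>
      <test>
        <testDriver>DemoTestDriver.dll</testDriver>
        <library>CodeUnderTest.dll</library>
      </test>
      <!-- additional <test> elements may follow -->
    </testRequest>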

Requirements:

Your Test Harness project:
  1. (1) Shall be implemented in C# using the facilities of the .Net Framework Class Library and Visual Studio 2015, as provided in the ECS clusters.
  2. (3) Shall implement a Test Harness Program that accepts one or more Test Requests, each in the form of an XML file that specifies the test developer's identity and the names of a set of one or more test drivers with the code to be tested. Each test driver, and the code it will be testing, is implemented as a dynamic link library (DLL). The XML file names one or more of these DLLs to execute.
  3. (2) The Test Harness shall enqueue Test Requests and execute them serially in dequeued order¹.
  4. (2) Each test driver derives from an ITest interface that declares a method test() that takes no arguments and returns the test pass status, e.g., a boolean true or false value². The same interface also declares a getLog() function that returns a string representation of the log³. A minimal sketch of this interface and a sample driver appears after the footnotes below.
  5. (3) Test execution for each Test Request shall run in an AppDomain that isolates test processing from Test Harness processing. Because we use a child AppDomain to run test executions, an unhandled exception in a test execution will not affect Test Harness processing.
  6. (2) Test logs are stored³ by the Test Request's AppDomain and may be retrieved using the getLog() function.
  7. (2) The Test Harness shall store test results and logs for each of the test executions using a key that combines the test developer identity and the current date-time.
  8. (2) The Test Harness shall support client queries about Test Requests⁴ from the Log storage.
  9. (2) Shall be accompanied by a test executive that clearly demonstrates you've met all the functional requirements #2-#8, above. If you do not demonstrate a requirement you will not get credit for it even if you have it correctly implemented.
  10. (1) Shall contain a brief text document that compares this implementation with the concept developed in Project #1. Does this project fully implement its concept? Was the original concept practical? Were there things you learned during the implementation that made the original concept less relevant?


  1. In Projects #3 and #4 we will make this process concurrent. That is, each test request, as described by a single XML file, will run on its own thread in its own AppDomain.
  2. We will discuss in class how a complex test that may require arguments to execute can be activated through this simple interface.
  3. Note that an AppDomain created by the test harness calls, for each test in the test request, the function bool test() and stores the test result and test log. Your design must support some way for the client to retrieve the log, using getLog(). For Project #4 your design has to ensure that the client does not try to access the log before it has been stored by the child AppDomain. Since Project #2 is single threaded that won't be a problem, but you should think about handling that situation when you prepare your OCD for Project #4.
  4. You will satisfy these requirements if your client query can get two kinds of information. One is simple summary information, e.g., author, date, and success status (true or false) for each test in the log. This assumes you have one log for each test request. The second type of query shows the entire log. Since I added this footnote after the due date for your first OCD we cannot expect your concept to work this way.
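
The sketch below shows one way the interface of requirement #4 might look, along with a trivial test driver. Everything beyond the test() and getLog() signatures - including the DemoTestDriver name and deriving the driver from MarshalByRefObject so a child AppDomain can hand a proxy back to the Test Harness - is an assumption about one workable design, not a mandated implementation.

    using System;
    using System.Text;

    // Interface every test driver implements (requirement #4).
    public interface ITest
    {
      bool test();      // run the test, return pass/fail status
      string getLog();  // return a string representation of the test log
    }

    // Hypothetical test driver, built into its own DLL along with (or referencing)
    // the code under test. Deriving from MarshalByRefObject lets the Test Harness
    // call it across the AppDomain boundary (requirement #5).
    public class DemoTestDriver : MarshalByRefObject, ITest
    {
      private StringBuilder log = new StringBuilder();

      public bool test()
      {
        log.AppendLine("DemoTestDriver: checking that 2 + 2 == 4");
        bool passed = (2 + 2 == 4);
        log.AppendLine("result: " + (passed ? "passed" : "failed"));
        return passed;
      }

      public string getLog()
      {
        return log.ToString();
      }
    }

Placing ITest in a small, separately built DLL that both the Test Harness and every test driver reference is one way to keep driver types out of the harness's own AppDomain.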

What you need to know:

In order to successfully meet these requirements you will need to:
  1. Write C# code and use basic facilities of the .Net Framework.
  2. Use AppDomains (a minimal sketch follows this list).
  3. Read and Write XML files (see the XDocument sketch at the end of this handout).
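
As a sketch of item 2: the .Net Framework AppDomain API can load a test driver into a child domain and call it through a proxy. The assembly and type names below are the hypothetical DemoTestDriver ones from the earlier sketch, and the code assumes ITest is defined in an assembly both domains can load.

    using System;

    class HarnessSketch
    {
      static void Main()
      {
        // A child AppDomain isolates test execution from the Test Harness (requirement #5).
        AppDomain child = AppDomain.CreateDomain("TestDomain");
        try
        {
          // Load the driver assembly in the child domain and unwrap a proxy to it.
          ITest driver = (ITest)child.CreateInstanceAndUnwrap(
            "DemoTestDriver",    // assembly name (DemoTestDriver.dll, without extension)
            "DemoTestDriver");   // type name (the earlier sketch declares no namespace)

          bool passed = driver.test();   // footnote 3: the child domain runs bool test()
          Console.Write(driver.getLog());
          Console.WriteLine("test passed: " + passed);
        }
        catch (Exception ex)
        {
          // An exception thrown while running the test surfaces here without
          // corrupting the Test Harness's own domain.
          Console.WriteLine("test execution failed: " + ex.Message);
        }
        finally
        {
          AppDomain.Unload(child);
        }
      }
    }
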
We may use MongoDB in Project #4. One of the TAs is looking at this to see if that would be manageable along with everything else you need to do for that Project. You might want to look a bit at the MongoDB documentation to get ready for that.
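
For item 3, the System.Xml.Linq types in the Framework Class Library are one straightforward way to read a Test Request. The element names below are the illustrative ones from the sample XML near the top of this handout, not a required format.

    using System;
    using System.Xml.Linq;

    class RequestReader
    {
      static void Main()
      {
        XDocument doc = XDocument.Load("TestRequest.xml");
        Console.WriteLine("author: " + doc.Root.Element("author").Value);

        // Each <test> element names one test driver and the code it will test.
        foreach (XElement test in doc.Root.Elements("test"))
        {
          Console.WriteLine("  driver:  " + test.Element("testDriver").Value);
          foreach (XElement lib in test.Elements("library"))
            Console.WriteLine("  library: " + lib.Value);
        }
      }
    }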