
Project #2 - Lexical Scanner

Using State-Based Tokenizer

Version 1.2
Due Date: Wednesday, October 3rd, 2018
Project #2 helper files

Purpose:

One focus area for this course is understanding how to structure and implement big software systems. By big we mean systems that may consist of hundreds or even thousands of packages [1] and perhaps several million lines of code. We won't be building anything quite that large, but our projects may be considerably bigger than anything you've worked on before.

In order to successfully implement big systems, we need to partition code into relatively small parts and thoroughly test each of the parts before inserting them into the software baseline [2]. As new parts are added to the baseline, and as we make changes to fix latent errors or performance problems, we will re-run test sequences for those parts and, perhaps, for the entire baseline. Managing that process efficiently requires effective tools for code analysis as well as testing. How we do that code analysis is illustrated by the projects for this year.

The projects this Fall focus on building software tools for code analysis. We will emphasize C# code but want our tools to be easily extendable to other similar languages like C++ and Java.

Code analysis consists of extracting lexical content from source code files, analyzing the code's syntax from its lexical content, and building an Abstract Syntax Tree (AST) that holds the results of our analysis. It is then fairly easy to build several backends that can do further analyses on the AST to construct code metrics, search for particular constructs, evaluate package dependencies, or some other interesting features of the code.

You will find it useful to look at the Parsing blog for a brief introduction to parsing and code analysis.

In this second project we will build and test a lexical scanner in C# that consists of two packages: a Tokenizer package and a SemiExpression package, as detailed in the requirements below.

Requirements:

Your Scanner Solution:
  1. Shall use Visual Studio 2017 and its C# Windows Console Projects, as provided in the ECS computer labs.
  2. Shall use the .NET System.IO and System.Text namespaces for all I/O.
  3. (2) Shall provide C# packages for Tokenizing, collecting SemiExpressions, and a scanner interface, ITokCollection.
  4. (4) Shall provide a Tokenizer package that declares and defines a Toker class that implements the State Pattern [3] with an abstract ConsumeState [4] class and derived classes for collecting the following token types:
    • alphanumeric tokens
    • punctuator tokens
    • special one-character [5] and two-character [6] tokens with defaults that may be changed by calling setSpecialSingleChars(string ssc) and/or setSpecialCharPairs(string scp) [7]
    • single-line comments returned as a single token, e.g., //
    • multi-line comments returned as a single token, e.g., /* ... */
    • quoted strings [8]
  5. (1) The Toker class, contained in the Tokenizer package, shall produce one token for each call to a member function getTok().
  6. (4) Shall provide a SemiExpression package that contains a class SemiExp used to retrieve collections of tokens by calling Toker::getTok() repeatedly until one of the SemiExpression termination conditions, below, is satisfied.
  7. (5) Shall terminate a token collection after extracting any of the single-character tokens semicolon, open brace, or closed brace. Shall also terminate on extracting a newline if a '#' is the first token on that line.
  8. (2) Shall provide a facility for defining rules that ignore certain termination characters under special circumstances. You are required to provide a rule that ignores the (two) semicolons within the parentheses of a for(;;) expression [9].
  9. (2) The SemiExp class shall implement the interface ITokCollection with a declared method get().
  10. (5) Shall include an automated unit test suite that exercises all of the special cases that seem appropriate for these two packages [10].
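To make the State Pattern requirement concrete, here is a minimal sketch of how a Toker might delegate token collection to ConsumeState subclasses. This is an illustrative assumption, not the required design: only two states (alphanumeric and punctuator) are shown, member names like Matches are invented for the sketch, and the comment, quoted-string, and special-character states required above are omitted for brevity.

```csharp
using System;
using System.IO;
using System.Text;

// Abstract state: each derived class knows how to recognize and
// collect one category of token from the shared character source.
abstract class ConsumeState
{
    protected TextReader src;
    protected ConsumeState(TextReader src) { this.src = src; }
    public abstract bool Matches(int peek);   // does this state handle the next char?
    public abstract string GetTok();          // collect one token of this state's type
}

// Collects runs of letters, digits, and underscores.
class AlphaState : ConsumeState
{
    public AlphaState(TextReader src) : base(src) { }
    public override bool Matches(int peek) =>
        peek != -1 && (char.IsLetterOrDigit((char)peek) || (char)peek == '_');
    public override string GetTok()
    {
        var sb = new StringBuilder();
        while (Matches(src.Peek()))
            sb.Append((char)src.Read());
        return sb.ToString();
    }
}

// Collects any single non-alphanumeric, non-whitespace character.
class PunctState : ConsumeState
{
    public PunctState(TextReader src) : base(src) { }
    public override bool Matches(int peek) =>
        peek != -1 && !char.IsWhiteSpace((char)peek) && !char.IsLetterOrDigit((char)peek);
    public override string GetTok() => ((char)src.Read()).ToString();
}

// Toker skips whitespace, then asks each state whether it handles
// the next character and delegates token collection to the match.
class Toker
{
    private TextReader src;
    private ConsumeState[] states;
    public Toker(TextReader src)
    {
        this.src = src;
        states = new ConsumeState[] { new AlphaState(src), new PunctState(src) };
    }
    public string GetTok()
    {
        while (src.Peek() != -1 && char.IsWhiteSpace((char)src.Peek()))
            src.Read();                       // discard whitespace between tokens
        foreach (var s in states)
            if (s.Matches(src.Peek()))
                return s.GetTok();
        return null;                          // end of input
    }
}

class Demo
{
    static void Main()
    {
        var toker = new Toker(new StringReader("int x = 42;"));
        string tok;
        while ((tok = toker.GetTok()) != null)
            Console.WriteLine(tok);           // int, x, =, 42, ; on separate lines
    }
}
```

The point of the pattern is that adding a new token type (say, multi-line comments) means adding one derived state class, without touching the Toker loop.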

  1. In C#, a package is a single source code file that contains:
    • prologue, providing a name, brief descriptive phrase, author information, and environment information
    • description of the package's responsibilities and required files
    • maintenance history
    • class definitions
    • a main function that implements construction tests for all the defined code
  2. A software baseline is the set of all code considered to be part of the current system, excluding experimental code that individual developers are working on.
  3. https://en.wikipedia.org/wiki/State_pattern
  4. You don't have to use the ConsumeState name. In the demo code I used TokenState.
  5. Special one character tokens: <, >, [, ], (, ), {, }, :, =, +, -, *
  6. Special two character tokens: <<, >>, ::, ++, --, ==, +=, -=, *=, /=, &&, ||
  7. You don't have to use these names, but if you use other names, the names should make it obvious what the functions do.
  8. The text "abc", quotes included, is returned as a single 5-character token "abc".
  9. This will be discussed in class.
  10. This is in addition to the construction tests you include as part of every package you submit.
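The SemiExpression requirements (termination on ;, {, }, and the for(;;) exception) can also be sketched briefly. The interface below follows the ITokCollection name from requirement 2, but its single get() method, the delegate-based token source, and the simplified for-loop rule (which does not handle nested parentheses) are all assumptions made for this illustration.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical scanner interface; the real assignment interface
// may declare additional members.
interface ITokCollection
{
    bool get(List<string> toks);   // fill toks with the next SemiExpression
}

class SemiExp : ITokCollection
{
    private Func<string> getTok;   // token source, e.g. a Toker.GetTok delegate
    public SemiExp(Func<string> tokenSource) { getTok = tokenSource; }

    public bool get(List<string> toks)
    {
        toks.Clear();
        bool inForParens = false;
        string tok;
        while ((tok = getTok()) != null)
        {
            toks.Add(tok);
            if (tok == "for") inForParens = true;           // start ignoring ';'
            if (inForParens && tok == ")") inForParens = false;
            if (!inForParens && (tok == ";" || tok == "{" || tok == "}"))
                return true;                                // terminator found
        }
        return toks.Count > 0;     // partial collection left at end of input
    }
}

class SemiDemo
{
    static void Main()
    {
        // Simulated token stream: for(;;) { x = 1;
        string[] toks = { "for", "(", ";", ";", ")", "{", "x", "=", "1", ";" };
        int i = 0;
        Func<string> source = () => i < toks.Length ? toks[i++] : null;

        var semi = new SemiExp(source);
        var collected = new List<string>();
        while (semi.get(collected))
            Console.WriteLine(string.Join(" ", collected));
        // First collection runs through the for(;;) semicolons and ends at '{';
        // the second ends at the ';' after "1".
    }
}
```

Taking the token source as a delegate keeps the sketch testable without a file; in the real packages SemiExp would hold a Toker reference and call getTok() on it directly.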

What you need to know:

In order to successfully meet these requirements you will need to know:
  1. Basics of the C# language: C# tutorial - PROGRAMIZ
  2. How to implement a simple class hierarchy. This will be covered briefly in lecture #3 and in more detail later.
  3. The .NET container (collection) classes.
  4. How to use Visual Studio. We will discuss this in one of the Help Sessions.