
Final Projects - Spring 2019

Version 1.0, Presentations throughout the Spring Semester

Purpose:

These projects are intended to:
You will pick one of the projects listed below to implement, either individually or with one or two other students, and will make three presentations on your project.
Note that you may elect to define your own project, subject to the Instructor's approval. A student-defined project should be compatible with the project goals stated at the top of this Purpose section.
You will find GitHub to be an interesting source of project ideas.
Note: If you do elect to use GitHub for ideas, please note that you won't get much credit for taking an existing GitHub project, making minor tweaks, and presenting that as your project.

Technologies:

Here is a list of interesting technologies - all ones I would enjoy working in - for you to consider for your final projects.
C++
  • C++11 provides strong support for programming with abstractions.
  • C++14 and C++17 provide support for template metaprogramming.
  • Strong types are an interesting application of C++ templates.
C#, .Net
  • C# is very similar to Java.
  • The .Net libraries provide strong support for the Windows platform.
  • Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF) are very well engineered frameworks for building graphical user interfaces and communication channels, respectively.
JavaScript: JavaScript now runs in many applications in addition to browsers, due to the introduction of Node.js. Node is an HTTP message-dispatching framework based on the Chrome V8 engine. Node programs are written in JavaScript, but it is also relatively easy to load and call C++ libraries from Node programs.
Python: Python is a scripting language with modern syntax and the ability to act as glue for code written in other languages.
HTTP: Hyper Text Transport Protocol is a messaging protocol that is the basis for web communications, and is now being used in many other applications. It was developed by Tim Berners-Lee at CERN in 1989.
HTML5: HTML5 is the latest version of the Hyper Text Markup Language. It supports fluid designs, hosting significant computation and storage in the client browser, and natively supports drawing and video.
WebWorkers: Web workers are JavaScript objects that run named JavaScript files on new threads. Workers communicate with the main thread using messages. Workers are not allowed to access the DOM or some of the Window methods and properties, but may call web services and interact with some browser data.
WebRTC & SignalR
  • WebRTC is an open source framework that supports real-time communication between a web server and connected clients.
  • SignalR is a .Net library that supports pushing content from a web server to connected clients in real-time.
Node.js: Node.js is a JavaScript framework that runs in the context of an execution engine based on Chrome's V8 engine. Essentially, the Node developers hosted V8 in platform executables instead of a browser. Node runs a single-threaded apartment model, e.g., concurrent clients can enqueue messages for a single thread in the Node engine to execute.
Express.js: Express.js is a web framework that runs on Node.js. It provides a small set of web server functionalities in a simple, easy to install framework.
Angular.js: Angular is a JavaScript framework that supports a client-side Model View Controller (MVC) structure. MVC provides an organized way to handle multiple views and data models.
React.js: React is a JavaScript framework for building web UIs. It supports changing view content without reloading the view's page.
MongoDB: MongoDB is a NoSql database, holding its data as JSON elements. It is open-source, easy to install and use, and is widely used for professional applications.
Firebase: Firebase is a platform, owned by Google, for web chatting, messaging, authorization, and web hosting. It is proprietary, but provides limited developer access at no cost.
Electron.js: Electron is an interesting framework for building desktop applications with JavaScript, HTML, and CSS.
Atom: Atom is a code editor built with the Electron framework.
Cinder (C++): Cinder is a C++ library for developing desktop applications that use graphics, audio, video, and scientific computing. It wraps the well-known OpenGL library, and purports to simplify designs compared to their OpenGL counterparts.
MEAN & MERN stacks
  • MEAN is an acronym for Mongo, Express, Angular, and Node. It is a set of components for building fast and simple web applications. The resource link, on the right, supports installing all those components in one pass.
  • MERN substitutes React for Angular in the MEAN stack and includes Redux and Webpack.
Containers: A container is a stand-alone, executable package for a software component. The container holds everything necessary to run the code, e.g., code, run-time, tools, libraries, and settings.
Alexa Services: Alexa Services support voice-enabling connected applications with speakers and microphones, and provide access to Alexa's built-in capabilities.
BlockChain: A blockchain is a growing collection of records (blocks) which are linked and cryptographically secured.

Project List for Spring 2019:

I'm still thinking about this list and may add a few more projects by the time classes start.
Note: Students are encouraged to make modifications to these project descriptions to suit their own tastes and interests. Treat them as "straw-men" that get you started. If you do make changes, please briefly discuss the changes with me. I will approve anything that meets the stated purpose at the top of this page, and that is interesting and has adequate content. I also encourage you to lay out interesting goals for your project, even if you are not sure that you can implement all of them. You won't be penalized for reaching high and not attaining all of your goals. You will be penalized for laying out very modest goals and not achieving those.
Table of Contents:
  1. Tiny HTTP Client and Server (TINYHTTP):

    The idea:

    Building block for C++ message-passing communication channels, servers, and other internet enabled programs.
    The goal of this project is to implement small and simple components, HTTPClient and HTTPServer, used to build message-passing communication channels in C++ on both Windows and Linux. Those can be used to communicate between any two platforms that support native code, e.g., Windows, Linux, Unix, and Macs. The most important measure of success for this project is to create a small code base that efficiently sends messages with very low complexity for the applications that use it.
    This project will use sockets (you may use a sockets package found in the Repository) to send HTTP style messages. Each message has a header consisting of text lines - the first is a command, and subsequent lines are attributes that describe the message with key:value pairs, e.g., content-length:543 implies that the header is followed by a block of 543 bytes of data. The HTTPClient and HTTPServer components will be developed for both Windows and Linux platforms. Another measure of success is the ratio of common code to that which is unique for each platform. Ideally that ratio should be large.
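    To make the message format concrete, here is a minimal C++ sketch that builds and parses such a header. The function and type names are only illustrative; they are not taken from the prototype code.

      #include <iostream>
      #include <map>
      #include <sstream>
      #include <string>

      // Build an HTTP-style message: a command line, key:value attribute lines,
      // a blank line, then a body whose size is announced by content-length.
      std::string buildMessage(const std::string& command,
                               std::map<std::string, std::string> attribs,
                               const std::string& body)
      {
        attribs["content-length"] = std::to_string(body.size());
        std::ostringstream out;
        out << command << "\n";
        for (const auto& kv : attribs)
          out << kv.first << ":" << kv.second << "\n";
        out << "\n" << body;                      // blank line separates header from body
        return out.str();
      }

      // Parse the header portion back into a command and an attribute map.
      void parseHeader(const std::string& msg, std::string& command,
                       std::map<std::string, std::string>& attribs)
      {
        std::istringstream in(msg);
        std::getline(in, command);
        std::string line;
        while (std::getline(in, line) && !line.empty())
        {
          auto pos = line.find(':');
          if (pos != std::string::npos)
            attribs[line.substr(0, pos)] = line.substr(pos + 1);
        }
      }

      int main()
      {
        std::string msg = buildMessage("POST /test", { { "mode", "oneway" } }, "hello");
        std::string cmd;
        std::map<std::string, std::string> attribs;
        parseHeader(msg, cmd, attribs);
        std::cout << cmd << " content-length:" << attribs["content-length"] << "\n";
      }

    The real components add socket send and receive on top of this kind of header handling.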
    You may re-implement one of your class projects that used WCF or another socket implementation. Alternately, you may collaborate with one or two other students working on a project that requires message-passing communication.
    You will find a prototype for HttpServer here. Since the prototype does a lot of the implementation for Windows, you will be expected to extend it in one or more of the following ways:
    • Port to Linux - already done, but needs some additional testing (I don't know of any bugs, but ...)
    • Interoperate with Java, which uses the Apache HttpClient and HttpServer components
    • Use as the basis for supporting microservices
    Note: The prototype server uses a single-threaded apartment model, which works fine for asynchronous communication, but does not work for synchronous communication like standard HTTP. The issue is that when making synchronous calls the caller waits for a response, so you have to either simply send an acknowledgement or else use a multi-threaded apartment model for the server. I'll discuss this briefly in class.
    Links:
    TOC
  2. Cross-Platform GUI using Chrome and Tiny HTTP Server (CPGUI):

    The idea:

    GUI built from Chrome using HTML5 and Tiny HTTPServer that will run on Windows and Linux with no changes other than recompiling the C++ HTTPServer.
    The .Net framework on Windows provides an elegant UI framework called Windows Presentation Foundation (WPF). On Linux there are a lot of GUI frameworks [2] that work well but don't provide the declarative programming style that WPF exposes through XAML. The intent of this project is to provide a new alternative that works for Windows, Linux, and Macs, based on use of the Chrome browser and one or more HTML5 pages to support declarative layout of the user interface.
    Each application that uses this new framework will host a tiny HTTPServer, as developed in the first project, and, on startup, will start Chrome with an appropriate HTML5 page and JavaScript library. Communication between the application and Chrome is based on Ajax calls, e.g., a button click will send a message to the application to do something that usually will not result in downloading a new page to Chrome. The application performs the requested action and returns any information that needs to be displayed, probably using JSON or XML.
    The result is that we can build cross-platform applications using the support libraries developed in the preceding project, application code that is common to both platforms, and GUI code that is also common to both platforms.
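    A minimal sketch of the application side of that exchange is shown below. It assumes the application registers a handler for each Ajax request path; the handler table and names here are hypothetical, not part of the Tiny HTTPServer.

      #include <functional>
      #include <iostream>
      #include <map>
      #include <string>

      // Hypothetical handler table: maps an Ajax request path to a function that
      // performs the requested action and returns a JSON reply for the browser.
      using Handler = std::function<std::string(const std::string&)>;

      std::map<std::string, Handler> handlers {
        { "/clicked", [](const std::string&) {
            // do the application work here, then describe the result as JSON
            return std::string(R"({ "status": "ok", "newText": "button was clicked" })");
          }
        }
      };

      // The embedded HTTPServer would call something like this when a request arrives.
      std::string dispatch(const std::string& path, const std::string& body)
      {
        auto it = handlers.find(path);
        return (it != handlers.end()) ? it->second(body)
                                      : std::string(R"({ "status": "unknown request" })");
      }

      int main()
      {
        std::cout << dispatch("/clicked", "") << "\n";   // simulate an Ajax call
      }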
    Links:
    TOC
  3. Cross-Platform Library (CPLIB):

    The idea:

    Platform specific libraries with common interfaces to support common code across multiple platforms.
    This project extends an existing C++ library of low-level program support components for both Windows and Linux. The idea is that each component provides the same interface on all supported platforms, but uses the operating system APIs to implement the interfaces. The current library has classes to manipulate files and directories for both Windows and Linux. To that, this project will add, for both Windows and Linux:
    • FileSystem library (already complete for both Windows and Linux - see FileSystem-Windows). It would be interesting to reimplement it based on std::filesystem (C++17; Microsoft C++ supports an experimental version). That has an interface based on directory_entry and iterators, which is harder to use than my FileSystem-Windows. I used essentially the same design as the .Net System.IO classes.
    • Sockets library, probably based on the library provided in Handouts/Repository. This will focus mostly on porting the existing Windows library to Linux and testing the result.
    • Process library to create, start, and communicate with new processes. This will use the Windows and Linux APIs to implement a Process class with some of the capability of the .Net Process class.
    • Thread Pool (already complete for Windows - see Repository; this may run on Linux as-is, but needs testing), using C++11 threading and locking constructs. This will involve creating a specified number of threads that block on a thread-safe blocking queue, waiting to dequeue lambdas that define the work to be done [1]. A minimal sketch of this idea appears after this list.
    • Process Pool, which supports spawning a fixed number of processes that communicate with a mother process to get tasks to run, with the added benefit of process isolation.
    • Communications library, perhaps based on collaborating with students working on the preceding project.
    • Graphical User Interface library, based on collaborating with students working on the next project.
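    Below is a minimal sketch of the Thread Pool idea from the list above, using C++11 threads, a mutex, and a condition variable. The class names are illustrative and do not match the Repository's code.

      #include <condition_variable>
      #include <functional>
      #include <iostream>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // Thread-safe blocking queue of work items (lambdas).
      class BlockingQueue {
        std::queue<std::function<void()>> q_;
        std::mutex mtx_;
        std::condition_variable cv_;
      public:
        void enQ(std::function<void()> item) {
          { std::lock_guard<std::mutex> lk(mtx_); q_.push(std::move(item)); }
          cv_.notify_one();
        }
        std::function<void()> deQ() {              // blocks until an item is available
          std::unique_lock<std::mutex> lk(mtx_);
          cv_.wait(lk, [this] { return !q_.empty(); });
          auto item = std::move(q_.front());
          q_.pop();
          return item;
        }
      };

      // A fixed number of threads block on the queue waiting for work.
      class ThreadPool {
        BlockingQueue workQ_;
        std::vector<std::thread> threads_;
      public:
        explicit ThreadPool(size_t n) {
          for (size_t i = 0; i < n; ++i)
            threads_.emplace_back([this] {
              while (true) {
                auto work = workQ_.deQ();
                if (!work) break;                  // empty function means shut down
                work();
              }
            });
        }
        void post(std::function<void()> work) { workQ_.enQ(std::move(work)); }
        ~ThreadPool() {
          for (size_t i = 0; i < threads_.size(); ++i) post(nullptr);
          for (auto& t : threads_) t.join();
        }
      };

      int main() {
        ThreadPool pool(4);
        for (int i = 0; i < 8; ++i)
          pool.post([i] { std::cout << "task " << i << "\n"; });  // output order may vary
      }                                            // destructor runs queued tasks, then shuts down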
    Links:
    TOC
  4. C++ Testharness using Process Pool (CPPTH)

    The idea:

    This project is similar in function to SMA's Project #4, Fall 2016, and OOD's Project #4, Fall 2018. It is implemented in C++, using a process pool to enable execution of libraries in isolation, instead of using C# and .Net AppDomains. It will still support high performance execution, but also enables testing of libraries from different technologies, e.g., C++, C#, and Java.
    If you completed OOD in the Fall of 2018, to do this project, you will have to extend it in interesting ways. Please discuss this with me before making this choice.
    The goal of this project is to support testing of more than one type of code. It does this by defining a base process class that supports process creation and communication with the main TestHarness process. Derived classes are defined to support a particular type of code execution, e.g., managed C# or Java code, or native C and C++ code.
    This project will use C++ sockets to build a message-passing communication system, and C++ code to develop the TestHarness, Process class for loading and executing C and C++ library code, and a demonstration client.
    The process class for testing C# code will be written in C++, but is intended to support starting and communicating with a process written in C# that loads C# libraries, and may use a C++/CLI wrapper to communicate with the TestHarness.
    The process class for testing Java code will be written in C++, but is intended to support starting and communicating with a process written in Java that loads Java libraries and will use Java sockets to communicate with the TestHarness.
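    A hedged sketch of that base-class idea follows. The interface shown is one plausible way to structure it, not an existing component, and the process start logic is stubbed out.

      #include <iostream>
      #include <memory>
      #include <string>
      #include <vector>

      // Abstract base: each derived class knows how to start a child test process
      // for one technology and exchange messages with it.
      class TestProcess {
      public:
        virtual ~TestProcess() = default;
        virtual bool start(const std::string& libraryPath) = 0;  // spawn child, load library
        virtual std::string runTests() = 0;                      // request execution, collect results
      };

      // Native C and C++ libraries: the child is a C++ loader process.
      class CppTestProcess : public TestProcess {
      public:
        bool start(const std::string& libraryPath) override {
          std::cout << "starting C++ loader for " << libraryPath << "\n";
          return true;  // the real version would CreateProcess/fork and connect a socket
        }
        std::string runTests() override { return "C++ tests ran"; }
      };

      // Managed C# libraries: the child is a C# process reached over a socket
      // or through a C++/CLI wrapper.
      class CsTestProcess : public TestProcess {
      public:
        bool start(const std::string& libraryPath) override {
          std::cout << "starting C# loader for " << libraryPath << "\n";
          return true;
        }
        std::string runTests() override { return "C# tests ran"; }
      };

      int main() {
        std::vector<std::unique_ptr<TestProcess>> procs;
        procs.push_back(std::make_unique<CppTestProcess>());   // requires C++14
        procs.push_back(std::make_unique<CsTestProcess>());
        for (auto& p : procs) {
          p->start("demo.dll");
          std::cout << p->runTests() << "\n";
        }
      }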
    TOC
  5. Repository using Pluggable Components (REPO)

    The idea:

    Build a cross-platform code and document repository that uses pluggable components to define core Repository services.
    The goal of this project is to support management of code and documents using pluggable components for:
    • Storage:
      The mechanics for managing directories and placement of files. Means for identifying the root of dependency chains, identifying Systems, Modules, and Packages. Are categories used? Are they based on namespaces?
    • Dependency Information:
      How dependency information is stored and accessed, e.g., with metadata XML files or with a NoSql database.
    • Versioning:
      How are versions tracked? How does a user explore the version sequence of a specified package?
    • Ownership:
      Are packages owned by a single developer? by a group? by any registered Repository user?
    • Checkin and Checkout Process:
      How are open (incomplete) checkins and closed (complete) checkins identified and managed?
    • Browsing:
      What kind of information is supplied to the user to browse the Repository structure? This is related to how storage is managed.
    • Building:
      When and how are packages built into libraries? How is Graphical User Interface (GUI) code handled? How is code from different languages handled?
    Components will be defined using interfaces and object factories. There needs to be a component installation interface that allows a component to register with, and be activated by, the Repository. Since some of these services need to use other services, there needs to be a way for them to communicate. One way is to use function call interfaces, but that means that every component may need to know the interfaces of many other components. That isn't a good idea as it promotes a very tight coupling between components.
    One nice solution to this problem is to use message-passing communication between components, as well as between the Repository and its Clients. This way, each component has to support a "PostMessage(Message)" interface where the parameters of the call are encoded into the body of the message. The message has a "from" and a "to" property so that the Repository's message dispatcher knows where to send the message, and the message recipient knows where to send a reply. Yes, you can support properties in C++.
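    Here is a minimal sketch of that message-passing structure: a Message with "to" and "from" properties, a one-method component interface, and a dispatcher that routes by the "to" property. Names are illustrative only.

      #include <iostream>
      #include <map>
      #include <string>

      // Message with "to" and "from" properties; call parameters travel in the body.
      struct Message {
        std::string to, from, command, body;
      };

      // Every pluggable component exposes the same one-method interface.
      struct IComponent {
        virtual ~IComponent() = default;
        virtual void postMessage(const Message& msg) = 0;
      };

      // The Repository's dispatcher routes messages by their "to" property.
      class Dispatcher {
        std::map<std::string, IComponent*> components_;
      public:
        void registerComponent(const std::string& name, IComponent* c) { components_[name] = c; }
        void route(const Message& msg) {
          auto it = components_.find(msg.to);
          if (it != components_.end()) it->second->postMessage(msg);
        }
      };

      // Example component: a versioning service that just reports what it received.
      class Versioning : public IComponent {
      public:
        void postMessage(const Message& msg) override {
          std::cout << "Versioning received '" << msg.command << "' from " << msg.from << "\n";
        }
      };

      int main() {
        Dispatcher disp;
        Versioning ver;
        disp.registerComponent("Versioning", &ver);
        disp.route(Message{ "Versioning", "Client", "getVersionChain", "pkg=Display" });
      }

    Replies travel the same way: the recipient builds a message addressed to the sender's "from" value and hands it back to the dispatcher.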
    One interesting question is how the Repository will be hosted. Here are some options:
    • C++ Process
    • Web interface using the MEAN stack. Here, MEAN stands for MongoDB, Express.js, Angular.js, and Node.js.
    • Asp.Net MVC
    Note: I've implemented something very similar to this using C# and the .Net framework, so please don't use that platform for your project.
    TOC
  6. REST Web Services as Program Components (REST):

    The idea:

    Modern version of Distributed COM - build systems from services found on the local network.
    REpresentational State Transfer (REST) services are web services that use the HTTP commands (Get, Post, Put, Delete, ...) as verbs that each describe an action applied to objects identified by a URL. Unlike conventional web services, REST does not use an embedded infrastructure like the Simple Object Access Protocol (SOAP). Its messages are passed in clear text unless transmitted over SSL. REST applications are usually simpler and faster than conventional web services.
    This project explores the decomposition of applications into a set of REST services and a small amount of glue code to complete the application. You might think of this as a UI application that delegates all of its processing to this set of REST services.
    This allows applications to share a lot of code with other applications that may be remotely located. Think of the service collection as a repository of functionality that may be used across a federation of servers in some enterprise application.
    Links:
    TOC
  7. MicroServices (MICROSERV):

    The idea:

    Compose programs from small service components with message-passing communication.
    Microservices are small, self-contained, building blocks used to create larger systems. The application aggregates a number of services to implement as much of its functionality as is practical. The application code instantiates services it intends to use and communicates with them using messages.
    If the services reside in separate processes, then the design uses socket-based asynchronous messaging with HTTP-style messages. If a service resides in the same process as the application code, then messages are sent directly to the receiver's queue.
    Application design focuses on selecting a set of services and configuring messages to support application activities. This project, of course, will have to build an initial set of services, based on a specified service architecture.
    You could select a modestly complex project and implement with services, as described above, perhaps building a directory synchronizer. Alternately, you could join a team that was building a TestHarness or Repository and implement that with microservices.
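    A minimal sketch of the in-process case described above, where each service owns a message queue serviced by its own thread, is shown here; the class and message names are made up for illustration.

      #include <condition_variable>
      #include <iostream>
      #include <mutex>
      #include <queue>
      #include <string>
      #include <thread>

      // An in-process microservice: messages posted to it go straight into its
      // queue, and its worker thread processes them one at a time.
      class EchoService {
        std::queue<std::string> queue_;
        std::mutex mtx_;
        std::condition_variable cv_;
        std::thread worker_;
        bool done_ = false;
      public:
        EchoService() {
          worker_ = std::thread([this] {
            std::unique_lock<std::mutex> lk(mtx_);
            while (!done_ || !queue_.empty()) {
              cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
              while (!queue_.empty()) {
                std::cout << "EchoService handled: " << queue_.front() << "\n";
                queue_.pop();
              }
            }
          });
        }
        void postMessage(const std::string& msg) {   // called by application code
          { std::lock_guard<std::mutex> lk(mtx_); queue_.push(msg); }
          cv_.notify_one();
        }
        ~EchoService() {
          { std::lock_guard<std::mutex> lk(mtx_); done_ = true; }
          cv_.notify_one();
          worker_.join();
        }
      };

      int main() {
        EchoService echo;
        echo.postMessage("getStatus");
        echo.postMessage("shutdownRequest");
      }                                              // destructor drains the queue, then stops the worker

    A service in a separate process would expose the same postMessage idea, but the message would travel over a socket as an HTTP-style message instead.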
    Links:
    TOC
  8. Virtual Display System (VDS):

    The idea:

    Build an interface for collaboration, sharing documents, code, webcams, and sketchpad.
    In a Software Modeling & Analysis final project (Fall 2012) we explored the use of a large display as a medium for collaboration in a Software Development Server Federation. This project implements a prototype for this kind of display system.
    VDS is an application that drives a large display [3] for use in collaboration systems. The purpose of this project is to develop a framework and rendering process to simultaneously display web-cam windows, document windows, drawing surfaces, directory views, and information derived from queries into NoSql databases or some repository.
    For this project the displayed resources would be source code text, pdf files, sketch pads, and Skype windows, all drawn from a local repository. It could be implemented as a browser displaying an HTML5 page that supports adding and removing elements, changing view sizes and locations, and saving a current session in some persistent format like XML.
    Links:
    TOC
  9. Virtual Repository Server (VRS):

    The idea:

    Build server template that is clonable and supports data replication - used by teams working on large projects.
    The Virtual Servers defined for this project have three properties:
    • Support a clone operation that builds a clone of an existing server on any Windows or Linux platform that invokes the source server's clone operation. The invoker can request that some subset of the source server's content be replicated on the target server.
    • Use a message-passing communication system based on the Tiny HTTPClient and HTTPServer from the first project.
    • Implement a message dispatcher mediator that makes adding new server functionality exceptionally easy.
    One obvious application of Virtual Servers is in Software Collaboration Systems. A project might have Repository, TestHarness, and Collaboration servers. Any one of the teams using the collaboration system may wish to replicate a Repository or TestHarness for their own team activities for initial development, before checking in software to the Project Repository.
    Links:
    TOC
  10. Remote Data Acquisition and Visualization using NodeJS (NODEJS):

    The idea:

    Explore a new and popular technology that uses server-side JavaScript to build high performance programs.
    NodeJS is a JavaScript framework that focuses on spawning activities on a remote server. The NodeJS framework is based on the Chrome V8 JavaScript engine and has a hosting process that allows JavaScript applications to run in an executable outside the usual browser framework. It is based on a Single Threaded Apartment model [5] that reads messages out of a queue and executes each message's request on a single thread. That has some interesting performance implications.
    A lot of environmental sensing systems use remote sensors that communicate with a central server to capture a stream of values for environmental variables, e.g., heat, acidity, chemical composition, etc. A NodeJS-based system would be ideal for managing that sensed data.
    Doing that will probably be impractical for a project in this class, but we could substitute data streams gathered from social media or simulated environmental models.
    It would be interesting to host Node.js on two or more Raspberry Pi boards, one or more for monitoring devices and one to display the results.
    Links:
    TOC
  11. Cloud Computing (CLOUDCOMP)

    The idea:

    Either set-up an open-source private cloud to learn how those things work, or use an existing technology like Azure or AWS to build a portfolio project.
    Use the open-source Ubuntu Cloud, which packages OpenStack for Ubuntu, or use OpenStack directly, to set up a demo cloud and develop a simple demo application. Alternately, use Microsoft Azure or Amazon AWS via a developer account to set up and explore a cloud application.
    Another interesting possibility is to use Node.js to build a private cloud.
    Links:
    TOC
  12. Personal Website using the MEAN Stack (MEAN):

    The idea:

    Build a portfolio project to demonstrate at job interviews using open-source technologies.
    There is currently a lot of interest in using the MongoDB, ExpressJS, AngularJS, and NodeJS (MEAN) stack for developing responsive, scalable websites.
    In this project you will explore these technologies by implementing a personal website that has the following features:
    • Resume Page
    • Repository page with descriptions and links to your project code submitted during your program and/or developed out of personal interest. This should include a mechanism to view individual pages of source code.
    • A story page, telling your story with well-formatted text and pictures, e.g., your Undergraduate Program, your Program at Syracuse University, your work experience, etc.
    • Other features that use specific properties of the MEAN stack.
    You could subscribe to a hosting service and provide a link on your resume for prospective employers to view.
    You could use a MERN stack instead, e.g., use the JavaScript library React instead of, or in addition to, Angular.
    Links:
    TOC
  13. Visio-Style Drawing Tool (DRAW):

    The idea:

    Create a subset of Visio style functionality - useful for building distributed collaboration systems, e.g., a distributed whiteboard.
    Develop a drawing tool that provides a (small) subset of the facilities of Visio, but is easier to use. For example:
    • Templates for specific visual objects like classes, packages, activity diagrams, connectors, etc.
    • The ability to move visual objects and/or the break points for connecting lines, i.e., the places where a connection line forms a right angle.
    • Connection points on visual objects that support maintaining the connections as a visual object is moved in the diagram.
    • Annotating visual objects with text and dropping text on other parts of a diagram.
    • You might, time permitting, implement a communication channel that allows remote collaboration using the drawing tool.
    There are two interesting ways to implement this tool:
    • Using HTML5 & JavaScript. This would require you to implement a JavaScript object model that wraps the various visual objects.
    • Using Cinder & C++11. Cinder is a C++ framework that wraps OpenGL, making it significantly easier to use. Here, you would implement a C++ object model for visual components.
    This tool has an obvious application to the Virtual Display System.
    Links:
    TOC
  14. Graph Visualization (GRAPHVIZ):

    The idea:

    Develop means to visualize large directed graphs - perhaps from dependency relationships in large software systems.
    Graphs - collections of vertices connected with possibly directed edges - are very useful data structures for capturing dependency relationships, geographic relationships, corporate and team structures, etc.
    For some applications the graphs may be very large, e.g., dependencies between packages in a large software system. The easiest way to extract useful information from large graphs is to view them in some well organized structure, perhaps on a computer screen. However, when the graphs are large it is not obvious how to organize them properly, e.g., place vertices in a view in which there aren't a lot of crossing edges. The larger the graph the more difficult it is to lay out the graph manually, e.g., by drag and drop.
    This project focuses on routing and displaying possibly large graphs by simulating a physical system in which each vertex has a repulsive force on every other vertex, perhaps inverse square law, tending to push apart the vertices, and edges have a restraining force pulling the two end-point vertices together, thus preventing infinite expansion of the graph. Here's an example bl.ocks.org.
    You would also need damping forces, proportional to the speed of motion of the vertices, to prevent oscillation in their motions. The way this routing algorithm behaves is determined by the constants used for the inverse square law repulsions, linear attractions, and linear retarding forces.
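    A minimal sketch of one time step of that simulation, with made-up constants for the repulsion, spring, and damping terms, might look like this:

      #include <cmath>
      #include <iostream>
      #include <vector>

      struct Vec2 { double x, y; };
      struct Vertex { Vec2 pos, vel; };
      struct Edge { size_t from, to; };

      // One Euler time step: inverse-square repulsion between every vertex pair,
      // linear (spring) attraction along edges, and velocity-proportional damping.
      void step(std::vector<Vertex>& verts, const std::vector<Edge>& edges,
                double kRepel = 1000.0, double kSpring = 0.05,
                double kDamp = 0.85, double dt = 0.1)
      {
        std::vector<Vec2> force(verts.size());
        for (size_t i = 0; i < verts.size(); ++i)
          for (size_t j = i + 1; j < verts.size(); ++j) {
            double dx = verts[i].pos.x - verts[j].pos.x;
            double dy = verts[i].pos.y - verts[j].pos.y;
            double d2 = dx * dx + dy * dy + 0.01;    // avoid divide-by-zero
            double d = std::sqrt(d2), f = kRepel / d2;
            force[i].x += f * dx / d;  force[i].y += f * dy / d;
            force[j].x -= f * dx / d;  force[j].y -= f * dy / d;
          }
        for (const auto& e : edges) {                // springs pull endpoints together
          double dx = verts[e.to].pos.x - verts[e.from].pos.x;
          double dy = verts[e.to].pos.y - verts[e.from].pos.y;
          force[e.from].x += kSpring * dx;  force[e.from].y += kSpring * dy;
          force[e.to].x   -= kSpring * dx;  force[e.to].y   -= kSpring * dy;
        }
        for (size_t i = 0; i < verts.size(); ++i) {  // integrate with damping
          verts[i].vel.x = kDamp * (verts[i].vel.x + force[i].x * dt);
          verts[i].vel.y = kDamp * (verts[i].vel.y + force[i].y * dt);
          verts[i].pos.x += verts[i].vel.x * dt;
          verts[i].pos.y += verts[i].vel.y * dt;
        }
      }

      int main() {
        std::vector<Vertex> v{ {{0,0}}, {{1,0}}, {{0,1}} };
        std::vector<Edge> e{ {0,1}, {1,2} };
        for (int i = 0; i < 100; ++i) step(v, e);    // iterate until positions settle
        for (const auto& vt : v) std::cout << vt.pos.x << ", " << vt.pos.y << "\n";
      }

    The rendering code (Cinder or WPF) would simply redraw vertices and edges at their new positions after each step.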
    You will find a C++ Graph class in the repository, so using Cinder to render the graphs would be a fairly direct way to approach the project implementation. Alternately, you could port the Graph class to C# [4] and use WPF for rendering.
    Links:
    TOC
  15. Software Visualization (SWVIZ):

    The idea:

    Using Graph Visualization (preceding project) develop means to explore large software systems.
    This project depends on the results of the preceding project. Here you build the mechanics to determine dependency relationships between packages, and use that information to draw a dependency graph, where each vertex is associated with metadata that describes its package.
    This supports traversal and exploration of large software systems. It could be very useful in a code repository for QA activities and for importing large projects into an existing repository.
    Here, all of the package information, including child dependency relationships, is stored in a NoSql database. We built one of those last semester in CSE681 - Software Modeling and Analysis, in Project #2. You will find my implementation of that project in the Repository.
    Links:
    TOC
  16. Audible logger (SPEECHLOG):

    The idea:

    Build a logging facility that writes to a collection of output streams, where one of the streams uses text to speech processing. That could provide demonstrations of program operation via spoken tags.
    It's a good idea to build loggers that allow programs to enqueue log items asynchronously, allowing program operation to proceed quickly while a background thread writes out the logged items from the queue. If you provide that background thread access to a vector of output streams, it can write each logged item into every registered stream in the vector. One of those streams could be a text-to-speech translator. The Windows API provides that capability, and there may be open-source libraries for it as well.
    In this project you implement a logging library in C++ that provides at least three levels of logging:
    • normal results
    • demonstration outputs
    • debugging outputs
    You may wish to use a static thread-safe blocking queue so that logs written anywhere in the program are handled in the same logging process.
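    A minimal sketch of that structure, using an in-memory string stream to stand in for the text-to-speech stream, could look like this (names are illustrative):

      #include <condition_variable>
      #include <iostream>
      #include <mutex>
      #include <queue>
      #include <sstream>
      #include <string>
      #include <thread>
      #include <vector>

      // Asynchronous logger: callers enqueue items and return immediately; a
      // background thread dequeues each item and writes it to every registered stream.
      class Logger {
        std::queue<std::string> queue_;
        std::mutex mtx_;
        std::condition_variable cv_;
        std::vector<std::ostream*> streams_;
        std::thread writer_;
        bool done_ = false;
      public:
        void addStream(std::ostream* s) { streams_.push_back(s); }
        void write(const std::string& item) {
          { std::lock_guard<std::mutex> lk(mtx_); queue_.push(item); }
          cv_.notify_one();
        }
        void start() {
          writer_ = std::thread([this] {
            std::unique_lock<std::mutex> lk(mtx_);
            while (!done_ || !queue_.empty()) {
              cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
              while (!queue_.empty()) {
                std::string item = queue_.front(); queue_.pop();
                lk.unlock();                       // don't hold the lock while writing
                for (auto* s : streams_) (*s) << item << "\n";
                lk.lock();
              }
            }
          });
        }
        void stop() {                              // flush remaining items, then join
          { std::lock_guard<std::mutex> lk(mtx_); done_ = true; }
          cv_.notify_one();
          writer_.join();
        }
      };

      int main() {
        std::ostringstream speechStream;           // stand-in for a text-to-speech stream
        Logger log;
        log.addStream(&std::cout);
        log.addStream(&speechStream);
        log.start();
        log.write("demo: computation finished");
        log.write("debug: queue drained");
        log.stop();
        std::cout << "captured for speech: " << speechStream.str();
      }

    A real implementation would add the three logging levels as filters applied before an item is enqueued.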
    Links:
    TOC
  17. Using Web Applications on the Desktop (WEBDESKTOP):

    The idea:

    There are a lot of interesting technologies designed for the web. This project explores embedding one or more of those web technologies in an application running on the desktop.
    I might use a technology like that to make a WPF-based demo of my web site, perhaps building a slide show of pages and menu dropdowns, with commentary in an adjacent panel. You can think of a lot of interesting applications that could use this idea.
    What this project attempts to do is to build an application that works like a conventional browser, but doesn't have any of the browser chrome, e.g., borders, toolbars, buttons, ... Instead, it embeds a browser control in a GUI like WPF so you can tailor its functionality to suit a particular application. Mozilla Labs (now shuttered) built a demo of that and posted it to GitHub. Microsoft built a browser control for IE, then built an updated version for Edge. This is a "bleeding-edge" project in that the Mozilla project is now inactive, and the Edge browser is being redesigned to host the Chrome V8 engine.
    Links:
    TOC
  18. Directory Synchronization using GO Language (GO):

    The idea:

    Explore using "classes" in a powerful language that doesn't have classes.
    GO is an interesting new language developed by Google engineers. Unfortunately it does not support classes, but it does have many interesting and powerful facilities: built-in concurrency with goroutines, interprocess communication, and a built-in web server - sort of like the C language on steroids.
    In this project you implement a directory synchronizer using GO. Directory synchronizers are used to:
    • Update existing files in the target directory from newer files in a source directory on another machine.
    • Optionally it will also copy files from the source directory that don't exist on the target machine.
    This will give you the opportunity to explore the way communication channels are used, how concurrency works, and the ease of use of the GO syntax.
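    The core synchronization rule is small. Here it is sketched in C++17 with std::filesystem, just to make the behavior concrete; the project itself would express the same logic in GO.

      #include <filesystem>
      #include <iostream>
      namespace fs = std::filesystem;

      // Copy from source to target any file that is missing on the target or
      // newer on the source - the rule a directory synchronizer applies.
      void synchronize(const fs::path& source, const fs::path& target, bool copyNew = true)
      {
        for (const auto& entry : fs::recursive_directory_iterator(source)) {
          if (!entry.is_regular_file()) continue;
          fs::path rel = fs::relative(entry.path(), source);
          fs::path dest = target / rel;
          bool missing = !fs::exists(dest);
          if (missing && !copyNew) continue;       // optionally skip files new to the target
          if (missing || fs::last_write_time(entry.path()) > fs::last_write_time(dest)) {
            fs::create_directories(dest.parent_path());
            fs::copy_file(entry.path(), dest, fs::copy_options::overwrite_existing);
            std::cout << "updated " << dest << "\n";
          }
        }
      }

      int main(int argc, char* argv[])
      {
        if (argc < 3) { std::cout << "usage: sync <source> <target>\n"; return 1; }
        synchronize(argv[1], argv[2]);
      }

    In the GO version the interesting parts are walking the remote directory over a communication channel and copying file contents concurrently.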
    Links:
    TOC
  19. Systems for Continuous Integration (SCI)
    Note: This project is too large for a Distributed Objects Final Project, but it illustrates where some of the other projects (Repository and TestHarness) are headed.

    The idea:

    Modern processes for developing large software systems (hundreds to thousands of packages) may have goals to:
    • Provide for continuous integration through automated testing that attempts to discover, when changes are made, breakage or performance issues at the earliest feasible time.
    • Support careful management of software baselines using repositories with embedded control mechanisms to ensure that a software base does not become corrupted with poorly designed and error-prone code.
    • Provide effective baseline browsing capability to support learning, design review, and quality assurance.
    • Support development processes in which the current software baseline is always working, continuously adding more capabilities as the project evolves.
    • Manage effectively a large number of configuration items.
    The first of these goals is likely to require, when we check in a modified package, testing of all of the parent packages that depend on the changed package, and occasionally testing all of the ancestors of the new package. This implies that we need to determine the dependencies and extract, for test builds, just those dependent packages and their dependencies. It also implies that developers provide test drivers for each package that is stored in the baseline.
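    To make the first goal concrete, here is a minimal sketch of finding everything that must be retested when one package changes, by walking a reverse-dependency map; the package names are invented for the example.

      #include <iostream>
      #include <map>
      #include <queue>
      #include <set>
      #include <string>
      #include <vector>

      // dependents[p] = packages that depend directly on p.
      using DependentsMap = std::map<std::string, std::vector<std::string>>;

      // Breadth-first walk: collect every ancestor of the changed package,
      // i.e., everything whose tests must be rerun.
      std::set<std::string> packagesToRetest(const DependentsMap& dependents,
                                             const std::string& changed)
      {
        std::set<std::string> result;
        std::queue<std::string> work;
        work.push(changed);
        while (!work.empty()) {
          std::string pkg = work.front(); work.pop();
          auto it = dependents.find(pkg);
          if (it == dependents.end()) continue;
          for (const auto& parent : it->second)
            if (result.insert(parent).second)      // enqueue each ancestor only once
              work.push(parent);
        }
        return result;
      }

      int main() {
        DependentsMap dependents {
          { "Sockets",    { "HttpServer", "HttpClient" } },
          { "HttpServer", { "Repository" } },
          { "HttpClient", { "Repository", "TestHarness" } }
        };
        for (const auto& p : packagesToRetest(dependents, "Sockets"))
          std::cout << p << "\n";                  // everything above Sockets gets retested
      }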
    The second goal is usually implemented with a configuration management system like git. Git doesn't store dependency information natively, and doesn't support extraction of individual packages in a simple way. The git model expects to clone a repository, create modifications, and check in the changes. Furthermore, it encourages branch and merge operations that are used to allow multiple developers to collaborate on making changes to large packages. Is this a good idea? To easily mechanize the first goal it makes sense to factor code into many relatively small packages - small enough that merging is not an essential operation because only one developer is working on each package.
    Achieving the third goal requires, again, a relatively fine-grained decomposition of code into packages with dependency information so that we can start browsing at a subsystem root and, by following dependencies and looking at embedded documentation and code, begin to understand how the subsystem works.
    The fourth goal can be supported using either coarse or fine-grained code decomposition strategies.

    Software Integration Issues:

    The way we implement integration depends in part on how we decompose software into packages.
    • Coarse-grained decomposition:
      This strategy divides software into relatively large packages, with potentially several classes per package, which may have fairly large functions. Package sizes might consist of around 2500 lines of code or more. Large packages may have several developer "owners" that collaborate on modifications and additions using branch and merge techniques. Git works very well in this environment.
      Pros:
      Relatively few packages to manage for modest size projects, making management and deployment easier.
      Cons:
      Large packages are hard to understand, hard to test, and require merging of modifications from multiple developers.
    • Fine-grained decomposition:
      This strategy divides software into relatively small packages, often with a single class per package and fairly small functions. Package sizes might consist of around 500 lines of code. Small packages may have a single developer "owner", so modifications and additions use branches but seldom require merging. Git doesn't work well in this environment: it doesn't support retrieval of package dependency information, and it is awkward to extract single packages from a git repository.
      Pros:
      Small packages are easier to understand, easier to test, and don't require merging of modifications because there is a single developer. Tests can be defined for each small package, making it easier to find and understand defects.
      Cons:
      Many packages to manage for even modest size projects, making management and deployment difficult without significant automation. Managing dependency information is required.

    Resources:

    This project considers two, rather different, systems for supporting continuous integration:
    • Microsoft's Team Foundation Server:
      This is a supported product that provides configuration management using either Team Foundation Version Control or git, automated builds, and testing and release management. It is intended to be a back-end for Visual Studio or Eclipse IDEs.
    • Software Collaboration Federation (SCF):
      This is an idea that has been partially realized in a series of projects in CSE681 - Software Modeling and Analysis, and CSE687 - Object Oriented Design. The intent is to use this idea as a straw man for what may be a desirable set of functionality for large-scale development. This project will use SCF as a reference for what may be an effective platform, and use that to question, and perhaps modify, the way that Team Foundation Server supports development.
    The goal of this project is to evaluate Team Foundation Server's use in support of large projects, and to develop advice about process, tools, and configurations appropriate for its use in this context. One way to do this is to download a sizeable open-source project of interest and add some new functionality. Alternately, we could use one of these DO class projects as the controlled baseline.
    TOC

Footnotes:

  1. This will be discussed in class.
  2. Ultimate++, CodeBlocks, wxWidgets, Qt, GTK+, ...
  3. That might be a large touch screen monitor, large screen TV, or in the future, wall size displays.
  4. Another alternative would be to use C++/CLI and WPF for the rendering, since C++/CLI code can interact with native code like the existing Graph class.
  5. STAs are used in the COM technology and will be discussed in class (this comment does not imply that you have to use COM in this project).


What you need to do and know:

In order to successfully develop project #1 and one of these projects you will need to:
  1. Know how to use the C++11 language and standard libraries.
  2. Have Visual Studio 2017 installed on your machine.
  3. Set up a virtual machine and a Linux distribution. I will be using VirtualBox and Ubuntu.
  4. Install g++ version 5.3 or Clang 3.3. We'll discuss this in class.
  5. Learn the Microsoft COM technology, as discussed in class.
