Updated Operational Concept Document

Jim Fawcett
CSE681-OnLine Software Modeling & Analysis
Summer 2017

Table of Contents:

Foreword:

This sample document is an Operational Concept Document (OCD) that describes the concepts used to design and implement a Pluggable Repository. The purposes for developing the Repository, and the methods used for its design, are explained in the body of the document.
This particular document has two purposes: to describe the concept of the Sample Projects #1, #2, #3, and #4, and to provide commentary on how you will build OCDs for your projects. When we describe the project purposes we will present both the purposes for a product that will serve an engineering organization and the academic purposes for these projects.
The results of our project efforts in this course are prototypes for a finished product. The prototypes we develop will be essentially functionally complete, but won’t have the extensive polishing and testing that a commercial product would have. We stop where the academic merit of additional work is no longer worth its cost in time and focus.
The four sample projects:
  1. Operational Concept Document for a Pluggable Repository
  2. Local Repository
  3. Communication infrastructure and Graphical User Interface
  4. Remote Pluggable Repository
are all focused on creating a final Pluggable Repository prototype. You might think of them as agile-programming sprints, each adding significant functionality to what will become the final product.
Occasionally, in the document body, we will provide commentary that is intended for instruction, not for directly describing the project concept. These comments will be placed in a callout box like that shown below.
Instructional comments that are not part
of the OCD proper will be formatted like this.

Executive Summary:

The Pluggable Repository is intended to be a customizable storage and control mechanism for packages in an evolving software baseline. It provides dependency-based storage and retrieval, and supports multiple versions for each stored package. Furthermore, it provides categorized storage for packages that support effective search and browsing in a large baseline that may contain thousands of packages.
The Repository, at startup, loads a set of policy components that provide most of its functionality. We customize Repository operations by providing one or more customized policy libraries.
Repository clients support baseline browsing through package dependency trees, and can extract the entire dependency tree contents by naming the root package of the tree. The browsing experience is enhanced by viewing metadata for each of the viewed packages.
The primary development risk concerns ease of use. The Repository and its Clients have to be designed to work smoothly together and hide much of the internal Repository operation from users.
Its size and complexity are manageable. We estimate that its implementation will need little more than one package for each of: the Repository proper, each of its individual policies, the communication infrastructure, and the Client graphical user interface.
The Remote Pluggable Repository is a good example of what an "Industrial Strength" project is like. It isn't large by professional standards, but certainly is for an academic project.
Your projects won't be quite this big, nor are they expected to have this level of polish. I spent about half my development time getting the user interface to work smoothly [1], and working out bugs with communication and local processing.
But your projects will have multiple packages, require threads and communication, like this one, and need a significant amount of testing. I think you may, at times, feel overwhelmed, but when you finish you will feel a sense of satisfaction in completing an interesting, but tough, job.
Finally, I think you will find the Pluggable Repository code is a rich source of information about how C# is used, how WPF and WCF work, and what you have to do to test your code efficiently.
  [1] See the Appendix for details.

Introduction:

The acronym OCD stands for Operational Concept Document. Its purpose is to make you think critically about the architecture, design, and implementation of a project before committing to code. It also serves to publish your concept to the development team, which for this course is you.
One focus area for this course is understanding how to structure and implement big software systems. By big we mean systems that may consist of hundreds or even thousands of packages and perhaps several million lines of code.
In order to successfully implement big systems, we need to partition code into relatively small parts that are easy to understand and easy to test. This is important because we need to thoroughly test each of the parts before inserting them into the project's software baseline. In order to do that effectively we need to understand each of the packages, their modifications, and their dependencies on other packages.
As new parts are added to the baseline and as we make changes to fix latent errors or performance problems in existing packages, we will be creating new versions of existing packages as well as creating new packages. When we make changes, we will run test sequences for those parts, all the parts that depend on the changed parts and, occasionally, for the entire baseline.
Because there are so many packages, some with several versions, we need a Repository to store the baseline and some semi-automated processes for checking in packages, for versioning, and for making queries on the Repository contents. These sample projects all focus on the structure, design, and implementation of a Repository for storing code baselines.
One obvious question is why would we do this since there are many well-established code control systems, e.g., git, Subversion, etc.?
In implementing a system for continuously integrating new code into complex systems, our goal is to detect breakage as soon as feasible after a change is submitted to the baseline. That means we want to test not only the changes, but also all packages that directly depend on the changes.
The most commonly used code control systems do not make the needed dependency information available, and sometimes make it awkward to extract just the packages needed for an integration test. Our Repository will be designed to provide fine-grained, dependency-aware, package management, to support continuous integration testing.
In this and following projects we will be creating a Repository - a semi-automated storage mechanism that provides pluggable policies for:
  1. File management
  2. Version control
  3. Package ownership
  4. Checkin and Checkout
  5. Package browsing
The term "pluggable" means that we can substitute one version of a policy for another without requiring any changes to the Repository code. We will see that this means policies need to be implemented as components: software parts that expose an interface and an object factory.
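As an illustrative sketch (in Python for brevity — the actual components are C# libraries loaded at runtime, and the names below are hypothetical), a policy component pairs an interface with an object factory, so the Repository binds only to the interface, never to a concrete policy class:

```python
from abc import ABC, abstractmethod

class ICheckinPolicy(ABC):
    """Hypothetical policy interface: the Repository depends only on this."""
    @abstractmethod
    def do_checkin(self, file_name: str) -> bool: ...

class DefaultCheckin(ICheckinPolicy):
    """One replaceable implementation; swapping it requires no Repository changes."""
    def do_checkin(self, file_name: str) -> bool:
        print(f"checking in {file_name}")
        return True

def create_policy() -> ICheckinPolicy:
    """Object factory: the only symbol the Repository needs from a policy library."""
    return DefaultCheckin()

policy = create_policy()           # Repository sees only ICheckinPolicy
policy.do_checkin("someFile.cs")
```

In the C# implementation the same roles would be played by an interface declaration and a factory exported from each policy library.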
For this project, we develop and document, here, the concept for creating a Repository that we will then implement in Sample Projects #2, #3, and #4.

Concept:

The Pluggable Repository consists of an application that loads libraries at runtime; those libraries provide all of the major functionality required to manage a code baseline. This structure allows a software engineering organization to customize the repository for its own style of product management.
Organizations usually have a process they use for building and managing a baseline which may differ from the way others do that. The Repository has a library for each of its major activities, allowing customization by simply using a different existing library or creating a new library for any of its functions.
Another part of the concept is the use of dependency-based storage. That allows a user to extract a package and all the other packages it depends on simply by naming the root package of that dependency subtree. This makes frequent building and testing of parts of the baseline much easier because we extract all of the parts needed for the build with one extraction request, without getting a lot of packages that are not needed.
Repository Storage Structure - Metadata and Files
The Repository represents each package with an XML metadata file. The metadata has a reference to its primary source code file, and also has references to the metadata files for source code files on which it depends. Essentially, the repository treats each metadata file as a package and its source code reference is simply the implementation of that package.
Each metadata file is a node in a virtual dependency forest. Each tree in the forest is a set of packages that are related through dependencies. Note that there is no runtime data structure holding the tree information. The repository navigates a tree by loading and analyzing each metadata file as indicated by following dependency references in the metadata.
Should that turn out to be a performance issue, we could always build an in-memory structure to hold this information, probably in a Dictionary, where each key is a package and the associated value is a list of child packages. Until navigation is shown to be a performance issue, we elect to avoid adding that bit of additional functionality, just to keep the implementation as simple as we can.
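A minimal sketch of that fallback (in Python; the package names are invented for illustration) — a Dictionary mapping each package to its child list, walked iteratively to collect a dependency subtree:

```python
# Hypothetical in-memory cache of the dependency forest:
# key = package (versioned metadata file name), value = list of child packages.
dep_cache: dict[str, list[str]] = {
    "App.xml.1":     ["Parser.xml.2", "Display.xml.1"],
    "Parser.xml.2":  ["Toker.xml.3"],
    "Toker.xml.3":   [],
    "Display.xml.1": [],
}

def dependency_tree(root: str, cache: dict[str, list[str]]) -> list[str]:
    """Walk from root, returning every package in the subtree exactly once."""
    seen: list[str] = []
    stack = [root]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue          # guard against shared or cyclic dependencies
        seen.append(pkg)
        stack.extend(cache.get(pkg, []))
    return seen

print(dependency_tree("App.xml.1", dep_cache))
```

Naming the root package thus yields the whole subtree needed for a build, which is exactly what Checkout does by reading metadata files on demand instead of this cache.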

Uses:

There are two kinds of uses we need to address: uses of a finished version of the final project, and instructional uses for each of the projects.
Uses of the finished product are defined by its use in a software development organization:
  1. Developer's daily activities:

    Developers use the repository more frequently than any other user group. The repository will be part of their daily work-flow and it is critically important that the repository operations make their work more efficient, not less. It must be easy to check in packages, extract package dependency graphs, and view repository contents. Developers spend a lot of time browsing through their own code and that written by others on which they depend or which they need to support.
    Impact on design:
    Navigation through large sets of packages should be relatively painless, and at any point in the browsing process the user should be able to examine the code and package details. Because the baseline may contain thousands of packages, it will be important to be able to group packages into sensible categories and allow navigation within the packages of each group. Dependency relationships between packages are an important aspect of a system’s design, and it is important that developers can easily navigate through a dependency graph. For that we will need to construct metadata for each package that describes its dependencies on other packages.
  2. Quality Assurance work flow:

    QA personnel extract large parts of a baseline to create builds for regression testing. They also need to run tools, across the entire baseline, that analyze conformance to code standards and look for structural defects.
    Impact on design: Because of the size of baselines we expect to manage in the Repository, it is important that scanning the baseline and extracting packages for builds be made as efficient as we can, within the constraints of cost and complexity. It would also be very useful to provide a tool interface that allows QA to automatically schedule and report the results of code quality scans.
  3. Manager’s need for progress information:

    A Program Manager is charged with delivering a large complex system that meets its customer obligations, satisfies code quality standards expected by the developing organization, and meets the allocated program schedule. To do that a manager reviews QA reports, and looks at code commit and testing activity. For example, just before a customer review, a manager would expect to see commit activity at a low level (very little new code entering the baseline) and a moderate amount of regression testing (QA ensuring a stable build demonstration). However, any hot-spots of developer commit and testing activity may indicate some part of the baseline hasn’t reached the level of maturity needed for a customer review.
    Impact on design:
    It will be important to provide logging of commit activity and tools to extract summaries of that information on a scheduled basis. Engineering organizations that develop large code baselines will need to support continuous integration testing in some form of test harness. The test harness would also provide logging information for managerial consumption. Our focus here is just on the Repository, but we should attempt to implement a design that will easily integrate with a separate Test Harness facility. That implies a programmatic extraction process and acceptance and storage of testing logs.
  4. Customer Code Maintenance:

    A Program Manager needs to supply summary-level progress information to customers at each review. This is likely to be a subset of the information the manager uses to assess weekly progress. When the product is delivered, it needs to be packaged for deployment. One very effective way to deploy a project is to provide the customer with a Repository with a subset of functionality suitable for code maintenance, assuming the customer’s engineering staff will be maintaining the product.
    Impact on design:
    There should be no additional impact on design to satisfy the need for progress information for customer reviews. That is already provided for managerial use. However, if we deploy the repository on product delivery, we will want to ensure that proprietary functionality is not part of that delivery. Since the concept already supports the Repository's use of plugins, we can simply configure any proprietary functionality as plugins that are not part of the deployment package.
Instructional uses have different actors, the developing student and instructor, with different needs. Each student needs to thoroughly test each part of the developing project and, at the end, demonstrate each requirement to the instructor. The instructor will run each project and look at its code implementation for evaluation.
  1. Student project development:

    The student developer needs to partition the repository functionality into relatively small, simple parts, each of which has built-in testing functionality to demonstrate successful development.
    Impact on design:
    Each repository part will need to provide test functions that can be run as part of stand-alone testing of the part, and later as one link in a chain of test processing.
  2. Instructor evaluation:

    Students are responsible for demonstrating that they meet each of the requirements in the Project Statement, e.g., Sample Project #2. They will need to log test results to the console, being careful to provide information, not just raw data. That is, the outputs should respond to each Project Statement requirement, using language that is easy to understand, and with data that is as brief as possible, while still demonstrating all parts of the requirement.
    Impact on design:
    Logging facilities should be configured to allow turning on or off demonstration outputs. Ideally, this logging will be a part of those facilities used by managers in the final product.

Structure:

The Pluggable Repository system includes a Repository process with multiple Client processes to support user actions on the Repository contents. Client processing and message-passing communication are parts of Sample Project #4, and are represented by the PluggableRepoClient and PCommService packages.
There has been one significant change in the processing concept between the initial OCD and the system described in this document. Originally the Pluggable Repo client was intended to provide user access to Repository functionality, almost entirely contained in the RepoServer. There was only one set of repository processing, in the server. The client software simply helped the user access and use that functionality.
However, as development proceeded, it became clear that it would be very useful to have each client provide local repository functionality, with the server then becoming simply a container for checked-in components. There were several major advantages to this revised concept:
  1. Each client could work off-line, and synchronize later when connected.
  2. Clients would not have to suffer communication latencies while performing routine code control operations.
  3. A client could hold, and manage, only that code important for the user, without needing to look at, or manage, a lot of code of no interest to that user.
  4. Code revisions could be exercised thoroughly by a local client, and not synchronized with the server until it is clear that the code works well.
  5. The icing on the cake is that this concept is significantly easier to implement, mostly because far fewer kinds of transactions need to occur between client and server.
The Repository structure for Sample Project #4, as implemented, is illustrated in the package diagram, below.
Pluggable Repository Client and Server Packages
This diagram contains all the Pluggable Repository packages and illustrates their calling dependencies. We will discuss, here, the responsibilities of the most important top-level packages.
  1. RepoServer:

    This package provides remote storage for checked-in packages. Its purpose is to support sharing versioned components between clients; it delegates most of the Repository functionality to the PluggableRepoClients.
  2. PluggableRepoClient

    Provides the user's Graphical User Interface, implemented with Windows Presentation Foundation (WPF), and holds an instance of PluggableRepo repository.
  3. PluggableRepo package:

    This is the core repository package, responsible for loading libraries for all of the plugin functionality and establishing run-time dispatching of Repository operations. It is part of the PluggableRepoClient, i.e., a local repository.
  4. Browse package:

    This package was intended to provide support for Client access using a Graphical User Interface, mediated by message-passing communication between Client and Repository. In this Project, its functionality has been implemented by other parts, so we now simply provide a loadable shell that does almost nothing. It is here in case, at some later time, we want to customize browsing operations. It may well turn out to be deprecated and removed.
  5. Checkout Package:

    Checkout provides the ability to extract a dependency tree by simply naming the root package of the tree. We implemented a single-owner policy for each package, so there is no additional functionality required for checkout. It is simply a dependency tree extractor.
  6. Ownership Package:

    The ownership policy determines which users are allowed to commit new versions of a package to the Repository contents. For this implementation we use a simple ownership policy: everyone owns everything, so there are no access restrictions. I don't believe this is good practice, so in a later version I will change that to single ownership; all of the hooks are in place to make that relatively easy. Under the single-owner policy only the owner of a package can commit new versions. Note that, under that policy, any user may still check out a package; they simply cannot modify it and commit the changed code unless they are the package owner.
  7. Checkin Package:

    Checkin is responsible for accepting a user-supplied file for storage, building metadata from user-supplied information, and moving the file and metadata into a specified Repository category folder. Checkins are either open or closed. The contents of an open checkin may be changed at any time without changing the version number. However, once a checkin is closed, its contents become immutable; any change to a closed checkin file can only be effected by a new checkin. Closed-ness is indicated by a property in the checkin's metadata.
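The versioning consequence of the open/closed rule fits in one small function (a Python sketch; the real logic lives in the C# Checkin and Version packages):

```python
def version_for_commit(current_version: int, is_closed: bool) -> int:
    """An open checkin is modified in place, keeping its version number;
    a closed checkin is immutable, so any change produces a new version."""
    return current_version + 1 if is_closed else current_version

print(version_for_commit(3, is_closed=False))  # 3: open checkin, same version
print(version_for_commit(3, is_closed=True))   # 4: closed checkin, new version
```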
  8. Storage Package:

    Storage handles all of the file copying and moving between a staging area and Repository category folders. It also provides information, on request, about the contents of each category. Storage is widely used, and has more implementation functionality than most of the other packages.
  9. Version Package:

    This package is used by checkin and storage to manage versioning of Repository contents. Our concept is to provide integer version numbers for everything in the Repository, including metadata XML files and source code files. We do that by appending a version number to each file name, e.g., someFile.cs.3 or someMetadata.XML.2. You can see that in action in the discussion of views, in the appendix.
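The trailing-number convention is easy to apply and invert (a Python sketch; the helper names are hypothetical):

```python
import re

def add_version(file_name: str, version: int) -> str:
    """'someFile.cs' + 3 -> 'someFile.cs.3'"""
    return f"{file_name}.{version}"

def strip_version(versioned_name: str) -> str:
    """'someFile.cs.3' -> 'someFile.cs' — needed when extracting files for a build."""
    return re.sub(r"\.\d+$", "", versioned_name)

print(add_version("someFile.cs", 3))   # someFile.cs.3
print(strip_version("someFile.cs.3"))  # someFile.cs
```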
  10. MetaData Package:

    The Repository views a metadata file as representing a package. The metadata contains a reference to the primary file, and to each of its package dependencies. It also contains descriptive information about the primary file for use in browsing repository contents.
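As a sketch only — the element names below are illustrative, not the implemented schema — a package's metadata file might look like:

```xml
<!-- hypothetical metadata for a package whose primary file is Parser.cs, version 2 -->
<metadata>
  <sourceFile>Parser.cs.2</sourceFile>              <!-- versioned primary file -->
  <description>parses token sequences</description> <!-- shown while browsing -->
  <isClosed>true</isClosed>                         <!-- closed checkins are immutable -->
  <dependency>Toker.xml.3</dependency>              <!-- a child package's metadata file -->
</metadata>
```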
  11. Relationships Package:

    Maintains parent-child relationships for the local repository. Note that definition of relationships is up to the user. That happens at checkin, when the user has just completed work on a package and knows its dependency relationships. So, defining relationships is a manual process, not automated by the repository. We could have elected to automate that, but the investment in doing so would have been large, and the payoff small in almost all use cases.
  12. FileSynch Package:

    On command from a user, the PluggableRepoClient sends a list of files from a specified directory to the RepoServer. The server compares that list with all the files in its own directory of the same name. It replies with two messages: one lists all the files the server has that were not on the client's list, i.e., files the client needs; the other lists all the files on the client's list that are not in the server's directory, i.e., files the server needs. The RepoClient makes that information available to the user in the Remote View and gives the user the opportunity to send and receive the appropriate files. I had originally expected to automate that process, but decided against it. Automatic updates would be very likely to send packages that were local checkins, not yet ready for prime time, to the server, causing problems for other users.
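The comparison the server performs reduces to two set differences (a Python sketch of the idea; the function name is hypothetical):

```python
def synch_report(client_files: set[str], server_files: set[str]):
    """Return (files the client needs, files the server needs), mirroring
    the two reply messages the RepoServer sends."""
    client_needs = sorted(server_files - client_files)  # server has, client doesn't
    server_needs = sorted(client_files - server_files)  # client has, server doesn't
    return client_needs, server_needs

client = {"Toker.cs.1", "Parser.cs.2"}
server = {"Toker.cs.1", "Display.cs.1"}
print(synch_report(client, server))  # (['Display.cs.1'], ['Parser.cs.2'])
```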
  13. PCommService

    Provides message-passing communication using Windows Communication Foundation (WCF). Our implementation is quick and flexible. One factor in that flexibility is that each end of the communication channel has a message dispatcher, implemented with a Dictionary that takes a message name as a key and associates with that key the processing required for the message. This makes it very easy to add new messages and their associated processing. This design strategy worked very well on both the client and server.
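The dispatcher idea sketches easily (in Python; the message names and handlers are invented for illustration):

```python
from typing import Callable, Dict

# message name -> processing required for that message
dispatcher: Dict[str, Callable[[dict], str]] = {}

def register(msg_name: str, handler: Callable[[dict], str]) -> None:
    """Adding a new message type is just one more dictionary entry."""
    dispatcher[msg_name] = handler

def dispatch(message: dict) -> str:
    """Look up the handler by message name and run it."""
    handler = dispatcher.get(message["name"])
    return handler(message) if handler else "error: unknown message"

register("getCategories", lambda msg: "categories: code, test, docs")
register("checkin", lambda msg: f"accepted {msg['file']}")

print(dispatch({"name": "checkin", "file": "Toker.cs.1"}))  # accepted Toker.cs.1
```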
  14. IPluggable and TestUtilities Packages:

    These are implementation details, widely used by other packages, but not important to the concept, and will not be discussed further.
  15. FileNameEditor

    Provides file naming services to several packages. Its purpose is to avoid duplication of similar code in a few packages, but is not important for understanding the Pluggable Repository concept.

Tasks and Activities:

The main tasks for the RepoServer are:
  1. Deliver a list of its directory names to a client on request. That happens when a RepoClient starts up, and also may happen when a client wants to synchronize files.
  2. Deliver a list of file names in a specified directory on client request. That always happens when a client wants to synchronize files.
  3. Accept a checkin package, i.e., a source file and its metadata XML file, when a client synchronizes files.
  4. Deliver a package, i.e., a source file and its metadata XML file, when a client synchronizes files.
The PluggableRepoClient activities are much more complex. The top-level details are:
  1. Viewing local files and metadata using the Navigation View
  2. Checking out a package and its dependency descendants, using the Checkout View
  3. Checking in a package, i.e., a source file and metadata XML file, using the Checkin View
  4. Connecting to a RepoServer and viewing message traffic. The user can also send test messages, used for debugging. These things are done using the Messages View. If there is no connection, the Remote View will, when needed, make a connection to the RepoServer specified in the Messages View.
  5. Adding or removing local directories using the Admin View. This view has very limited functionality, but will, in a later version, support configuring the server directory service, if the user has admin privileges.
  6. Synchronizing source and metadata files by transferring files between the RepoClient and RepoServer, using the Remote View. The only transactions between a RepoClient and its RepoServer are activities that occur using the Remote View, and to a lesser extent, with the Messages View.
More details about these activities can be found in the Appendix.

Issues:

The Pluggable Repository has a simple operational model: store, and make accessible, source code packages and their dependencies. However, its implementation is relatively complex and there are a number of design decisions we needed to think critically about before committing to code:
  1. Versioning:

    Within the Repository, we’ve elected to represent versions by appending a version number to each file name, and use those versioned names in all metadata references. However, when building binary versions of the code we must strip off the version numbers. That is easy enough to do. However, for users to scan dependency trees, version numbers are needed.

    Proposed solution:

    We elected to provide local repositories for all user browsing. These are expected to hold only the information needed by a specific user. We download files with synchronizing operations between a RepoClient and RepoServer, retaining version numbers. Those are stripped off only when a local Checkout operation occurs.
  2. Performance:

    There are two activities that have a significant impact on performance: browsing repository contents, and the related scanning of dependency trees for extraction. Both of these relate to the use of metadata to represent packages and dependencies.

    Proposed solution:

    We are doing all browsing in the local repository, so there is no performance problem for browsing. We elected to use on-demand loading of metadata files, i.e., reading metadata information, as needed, from a metadata file. We should monitor the resulting performance. If that turns out to be an issue, we can build a run-time Dictionary of packages and dependency information, or even use an existing storage mechanism like MongoDB. Some of that information is already cached, and moving to an in-memory metadata facility would not be too difficult.
  3. Ease of use:

    If the Repository isn’t easy to use, developer productivity will suffer, and we will have wasted a lot of time and effort building it.

    Proposed Solution:

    To make the Repository easy to use we need to make its operations reliable and as simple as is reasonably possible. We did that by making the Repository parts as simple as we could, and by extensively testing them as we built. Also, the client interface acts as a mediator, hiding much of the complexity of interacting with the Repository. See the Appendix for a fairly complete description of the Client UI views and their activities.
  4. Traceability:

    Managers need the ability to view commit histories and timelines, to assess progress of the current project.

    Proposed solution:

    If we elected to save metadata information in an auxiliary database, then it would be easy to capture commit events and query for commit histories. We will elect, instead, to extract that information, on demand, from the metadata itself. Each metadata file contains the time of creation, and it will be relatively easy to build a history scanner to extract views into that information. Note that gathering this information doesn’t happen very often, perhaps once a day, so performance isn’t really an issue for this operation. This has not been implemented in the current version, but is planned for a future version.
  5. Size of the Repository contents:

    A production Repository will need to hold, perhaps, thousands of packages. How will a user find a particular package?
    Solution: We divided the Repository storage into a number of categories, each of which holds only a modest number of packages, supporting search and retrieval operations. Each category is implemented as a sub-folder within the storage.
    In retrospect, this seems inadequate. The single layer of directories, with one directory for each category of stored items and only files inside each category directory, is just too coarse-grained. I plan, very soon, to implement a scheme where categories are still the main organizing strategy, but each category may contain both files and lower-level directories.
    I don't foresee any significant difficulties implementing that structure. It will just take a bit of time to work out the details.

Risks:

We want to identify risks associated with building and using our Pluggable Repository. We do that to design for minimum risk, and to weigh the cost-benefit of building the Repository.
  1. Productivity:

    The impact on productivity of using the Repository is the dominant risk factor. If we don’t design well and implement carefully, the Repository could be clumsy to use and might reduce productivity rather than improve it.
    Mitigation: Use of pluggable components makes it easy to replace any one of them should it prove to make operations too complex and difficult to use. We should allocate schedule time to polish the Client/Repository operations to improve their ease of use.
  2. Security:

    The Repository will hold contents with proprietary value. We want to protect that from unauthorized access, to prevent theft and malicious changes by competitors.

    Mitigation:

    We elected to defer security to the perimeter defenses employed by using organizations. This decision was based on academic use. Since this course does not focus on security concerns, implementing secure operations in our Repository would use time and effort better spent on the other activities. A commercial product would surely need some effective access control.

Conclusions:

A prototype for the Pluggable Repository concept was implemented with reasonable effort and schedule resources. It is likely to improve developer and quality assurance productivity, and, with some relatively small additions, make progress information available to managers and customers. But it will take daily use to evaluate just how well all of that really works.
An initial feasibility prototype was implemented quickly and provides a very good vehicle for assessing impact on software development productivity. I plan to use it for my own software development activities, after extending the repository storage to multiple levels of directories.

Appendix: UI Views:

Checkout View

The CheckOut Tab is used to retrieve a package and, optionally, its descendants. The retrieved files have had their version numbers removed, so they can be placed directly in a Visual Studio solution for building.
When the CheckOut View opens for the first time it loads a list of categories. Double clicking on a category loads the files from that category. If you select a metadata file, its descendants are shown in the lower list. Clicking the CheckOut button copies the root file and metadata to the StagingStorage directory. If the CheckOut Descendants button is checked (the default), all the package's descendants are copied as well.

Checkin View

The CheckIn Tab is used to check in files from the user's workspace. Checking in builds package metadata and copies versioned source and metadata into repository storage.
When the CheckIn View opens for the first time it loads a list of categories. Double clicking on a category loads the files from that category. The first category you open becomes the target category, where the package will be checked in. You then browse the user workspace and select a file to check in.
If a file with that name already exists in the repository, its metadata is loaded into the Description and Children lists in the view. If not, those displays are empty.
You can then select children (direct dependencies) from the repository folders, switching to other categories if needed (that won't change the target category). Double clicking a file places it in the children list. You can remove a child by double clicking on that child.
When you click the CheckIn button, a metadata file is constructed and given a version number. If the last checkin of that file is still open, the version number remains the same and you simply modify that version.
If a checkin is closed, it is immutable, and any change results in a new version. In either case the file and its metadata are copied into the target category selected at the beginning of the checkin activity. Note that the target category is shown in the textbox beside the Show Categories button.
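The version-numbering rule just described (open check-ins are modified in place; closed check-ins are immutable, so any change produces a new version) can be sketched as a small decision function. The name and signature are illustrative, not taken from the repository's code:

```python
from typing import Optional

def next_version(latest_version: Optional[int], last_checkin_open: bool) -> int:
    """Decide which version number a new check-in receives.

    - The first check-in of a file starts at version 1.
    - If the latest check-in is still open, it is overwritten in place.
    - If the latest check-in is closed (immutable), a new version is created.
    """
    if latest_version is None:
        return 1                      # first check-in of this file
    if last_checkin_open:
        return latest_version         # modify the open version in place
    return latest_version + 1         # closed version is immutable; bump
```

For example, checking in a file whose latest version 3 is closed yields version 4, while an open version 3 is simply replaced.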

Messages View

The Messages Tab is used to select a RepoServer address and port, connect to it, and to view message traffic. You can also send test messages. That facility was used primarily for debugging, but is also a nice way to learn how the communication process works.
The Message View opens with a default address (localhost) and port (8080), but you can change either. You click the Listen button to start listening for connect requests. You can't unlisten, as that tends to cause problems on Windows platforms: after a listener stops, Windows keeps the previously used port bound, and any attempt to start listening on that port again throws an exception. You can, however, disconnect and reconnect to the same listener or to another.
There are several types of test messages that you can send. When you do that, you will see, on the RepoServer console, its reception of the message, and for some messages, its replies.
You don't really need to use the Message View for anything other than to start listening. The Remote View will also trigger connections if it is asked to communicate with the server.
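The Listen behavior, and the port-rebinding caveat above, can be illustrated with a raw TCP listener. This is a Python sketch, not the C# communication code the client actually uses; SO_REUSEADDR is one common workaround for rebinding a recently closed port, though its semantics differ on Windows:

```python
import socket

def start_listener(host: str = "localhost", port: int = 8080) -> socket.socket:
    """Open a TCP listening socket, like the Message View's Listen button.

    SO_REUSEADDR lets a new listener bind a port that a recently closed
    listener left in TIME_WAIT; note that Windows interprets this option
    differently than Unix-like systems.
    """
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))   # raises OSError if the port is unavailable
    listener.listen(5)            # start accepting connect requests
    return listener
```

Passing port 0 asks the operating system for any free port, which is a convenient way to test the sketch without colliding with an existing listener.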

Admin View

The Admin Tab is the least developed of all the Views. It enables three things:
  1. Adding a new category
  2. Removing a category if there are no files
  3. Refreshing the parent-child relationships cache by reading and processing all the metadata files in the repository
Eventually the Admin View should be used to configure and administer the RepoServer, using a client local to the server. That should require PluggableRepository administrator privileges. None of that has been implemented in this version.
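The cache refresh in item 3 amounts to inverting the child lists found in the metadata files. A minimal sketch, assuming the metadata has already been parsed into a map from package to children (the function name is illustrative):

```python
def build_parent_cache(metadata: dict) -> dict:
    """Invert package child lists into a parent lookup table.

    'metadata' maps each package to its children (direct dependencies);
    the result maps each package to the packages that depend on it.
    """
    parents = {}
    for pkg, children in metadata.items():
        for child in children:
            parents.setdefault(child, []).append(pkg)
    return parents
```

A full refresh would first walk the repository's category directories and parse each metadata file to build the input map; that parsing step is omitted here because the metadata format is not specified in this section.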

Remote View

The Remote View Tab is used for most communication with the RepoServer. It enables three things:
  1. Viewing the contents of the RepoServer
  2. Comparing contents with the local Repository
  3. Selectively synchronizing remote RepoServer storage with local RepoClient storage
Initially, in the Remote View Tab, the Remote Categories and Files list is empty. If we click on Show Remote Categories button, the client sends a getCategories request to the server, and this view gets populated with the remote categories.
If we then double click on the same category in both the local and remote category lists, the client retrieves the files from that category and populates the local list. It also sends a getFiles request to the server; the reply lists the files in that category on the server and is used to populate the remote list.
If we then click the Show Synch button, the client sends a synch request to the server. That request contains a list of the local files in the selected category.
The server then compares the incoming file list with its own files in the same category. It sends back two replies: one listing the files in the server's category that are not on the client's list, implying that the client should retrieve them; the other listing the files on the client's list that are not on the server, implying that the client should send them.
The RepoClient uses those two replies to mark files in the local list that should be sent to the server and files in the remote list that should be retrieved from the server.
Note that the client is not obligated to act on either list. The user may have no interest in some of the server's files in that category, or may feel that some files in local storage are not yet ready for prime time and should not be sent to the server.
Usually, however, the user synchronizes both sets of files manually, simply by double clicking on a marked file. That sends either a getFile request or the clicked file itself to the server. Double clicking an unmarked file in either the remote or local list does nothing.
In the screenshot above, you see the results of clicking the Show Synch button, i.e., two messages are sent to the server: one to get the list of local files needed by the server (synchRemote) and one to get the list of remote files needed by the client (synchLocal).
Double clicking a file marked "get from server" results in a sendFile message; the file is transferred to the local client and the "get from server" mark is removed.
Double clicking a file marked "send to server" results in the client sending an acceptFile message and transmission of the file from client to server.
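The comparison the server performs to produce its two synch replies is essentially a pair of set differences. A minimal sketch, with illustrative names (the actual message handling is not shown):

```python
def plan_synch(local_files: set, remote_files: set):
    """Compute the two synch reply lists described above.

    Returns (get_from_server, send_to_server): files present only on the
    server should be retrieved by the client; files present only on the
    client should be sent. Sorted for a stable display order.
    """
    get_from_server = sorted(remote_files - local_files)
    send_to_server = sorted(local_files - remote_files)
    return get_from_server, send_to_server
```

For example, with local files {"a", "b"} and remote files {"b", "c"}, the client would mark "c" as "get from server" and "a" as "send to server"; the user then decides which marked files to act on.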
Navigation View
The Navigation Tab is used to examine contents of the Repository:
  • The categories list
  • Versioned files and metadata in a category
  • Source code and metadata for a specified file
In the files view, double clicking on a file opens a new window containing the file's source code and the content of its metadata file.
When the Navigation View opens for the first time it loads a list of categories. Double clicking on a category loads the files from that category. If you select a metadata file, its children are shown in the lower list. Optionally, clicking the Parents button displays the file's parents.
Navigation View Activities