Leveraging our Dataset
In the previous announcement, we defined the concept of task bidding using durations. The bidding mechanic provides a system for resolvers to propose their assignment to tasks in Moonlight.
Bidding represents the expected 'real time' for the resolver to complete a task. There are various methods for estimating task durations (or values) and bids across the conventional project management space. Unfortunately, bid accuracy is relatively volatile and highly correlated with subject matter expertise and task duration, which can present a substantial challenge for project tracking. A common countermeasure is to make a three-point duration estimate (high, low, expected) for use in scheduling. This data can then be used to calculate a 'project buffer' and a 'project manager buffer' to account for schedule volatility due to estimation inaccuracy. In Moonlight, we still recommend a buffer, but propose that it be minimized because our system reduces the error associated with estimation accuracy.
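To make the three-point mechanics concrete, here is a minimal sketch using the classic PERT weighted mean and a root-sum-square buffer. Both formulas are common conventions from the project management literature and are assumptions here, not Moonlight's exact calculation.

```python
# Illustrative three-point (PERT) estimate and a simple project buffer.
# Both formulas are standard conventions, shown as assumptions rather
# than as Moonlight's actual method.

def pert_estimate(low, expected, high):
    """Classic PERT weighted mean of a three-point estimate (in days)."""
    return (low + 4 * expected + high) / 6

def project_buffer(tasks):
    """Root-sum-square of each task's spread above its PERT estimate."""
    return sum((h - pert_estimate(l, e, h)) ** 2 for l, e, h in tasks) ** 0.5

tasks = [(2, 4, 8), (1, 2, 4), (5, 7, 12)]   # (low, expected, high) per task
total = sum(pert_estimate(*t) for t in tasks) + project_buffer(tasks)
```

Because the buffer grows with the square root of the summed variances rather than linearly, reducing per-task estimation error (as Moonlight proposes) shrinks the buffer disproportionately.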
Figure 1: To reduce the estimation error, we leverage the historical task data that organizations have accumulated on the blockchain.
Skills required for a task are logged along with the resolver's bid. This data is used to provide the issuer with an accurate task duration for each potential resolver who has cast a bid. Because Moonlight logs task skills in addition to bid accuracy, the prediction can be skill specific.
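A skill-specific prediction of this kind could be sketched as follows; the field names and the mean-ratio correction are illustrative assumptions, not the platform's published algorithm.

```python
# Hypothetical sketch: correct a new bid using the resolver's historical
# (actual / bid) ratios for the task's required skill. Field names and the
# mean-ratio correction are assumptions for illustration only.

def corrected_duration(bid_days, history, skill):
    """Scale a bid by the mean actual/bid ratio observed for this skill."""
    ratios = [h["actual"] / h["bid"] for h in history if h["skill"] == skill]
    if not ratios:
        return bid_days            # no data: pass the bid through unchanged
    return bid_days * sum(ratios) / len(ratios)

history = [
    {"skill": "solidity", "bid": 10, "actual": 12},
    {"skill": "solidity", "bid": 5,  "actual": 6},
    {"skill": "design",   "bid": 4,  "actual": 4},
]
est = corrected_duration(8, history, "solidity")   # 8 * 1.2 = 9.6 days
```

A resolver who habitually underestimates solidity tasks by 20% thus yields a corrected expectation for the issuer without changing the bid itself.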
This introduces a few system complexities:
Users looking for tasks without any experience in the desired skill-set will have difficulty using the task marketplace without accommodation by the system. Moonlight will handle this situation (which every incoming user will experience) through a number of mechanisms:
- The marketplace will not prevent users from finding and placing bids on tasks for which they do not have the required skills. This deregulation allows resolvers to place extremely competitive bids on tasks as a mechanism to offset a lack of experience.
- Moonlight will support unvalidated (off-chain) resume content to represent experience which occurred outside of our ecosystem.
- The Moonlight project will form strategic partnerships with external qualification verification platforms to allow users to build their validated qualifications without prior experience in the Moonlight ecosystem. These topics will be covered in additional detail in the white-paper. Strategic partnerships will be presented on the Moonlight website as they become available.
When bidding on a task, a resolver is estimating the time to complete the task. If the historical dataset is limited or internally inconsistent, projecting the expected completion time of the task becomes unreliable. In this scenario, the platform may provide the bid as a pass-through estimate of the task's duration, with a notification to the issuer that there is not enough data to provide an improved estimate.
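The fallback described above might look like the following sketch, where a correction is only applied when the accuracy history is both plentiful and consistent. The thresholds are invented for illustration.

```python
# Illustrative fallback logic: only apply a historical correction when the
# accuracy data is both plentiful and consistent. Thresholds are assumptions.
import statistics

MIN_SAMPLES = 5     # hypothetical minimum history size
MAX_SPREAD = 0.5    # hypothetical cap on the std. dev. of actual/bid ratios

def estimate_with_fallback(bid_days, ratios):
    """Return (estimate, passed_through) given historical actual/bid ratios."""
    too_few = len(ratios) < MIN_SAMPLES
    too_noisy = len(ratios) > 1 and statistics.stdev(ratios) > MAX_SPREAD
    if too_few or too_noisy:
        return bid_days, True          # pass the bid through unchanged
    return bid_days * statistics.mean(ratios), False

est, passed = estimate_with_fallback(8, [1.1, 1.2])   # too little data
```

When `passed` is true, the issuer would see the notification that no improved estimate is available.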
As resolvers work within the system, the estimation accuracy dataset becomes a mechanism for personal improvement. Resolvers can review their estimation accuracy data when bidding on tasks in order to improve the accuracy of their bids. This introduces a bias into the estimates which allows resolvers to self-correct for their bid inaccuracy. Fortunately, this bias can be reduced by accounting for transient changes in the estimates.
Users of the system are expected to have their estimation accuracy distributions asymptotically approach an expected value of 1 for skills which they are actively developing. Error associated with estimate precision is never expected to be removed from the system, but can be substantially reduced by clear task definition by the issuer.
Figure 2: Organizations can review their skill-specific estimation accuracies as a mechanism for improvement over time.
Task Definition and Tracking
In Moonlight, issuers have the ability to assign task dependencies, which allows for complex task structures. Remember that each individual task can be made up of other tasks. The combination of these two task properties provides both scalability and a range of detail on a project. For example, a project owner may have three tasks with dependencies, within which a number of other tasks are defined.
Figure 3: Tasks defined in Moonlight can optionally be assigned relationships to other tasks to form a network diagram.
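A task network of this kind is essentially a directed acyclic graph, and a valid work order can be recovered with a topological sort. The task names below are invented for illustration.

```python
# Minimal sketch of a task network: tasks may depend on other tasks.
# Task names and the dependency structure are illustrative only.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
deps = {
    "design":   set(),
    "backend":  {"design"},
    "frontend": {"design"},
    "launch":   {"backend", "frontend"},
}

# Any valid ordering starts with 'design' and ends with 'launch'
order = list(TopologicalSorter(deps).static_order())
```

Subtasks could be modeled the same way, with a parent task's duration derived from the network of tasks it contains.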
As bids are received from resolvers, organization-specific task completion data is mapped onto the bids. This results in a 'corrected' expectation of task duration for the issuer to use when tracking task progress. If multiple bids are received on an individual task, the issuer may review each bid to select the most appropriate for their project needs.
Figure 4: Bidding occurs on the defined tasks through the marketplace. As bids are made on tasks, the bid accuracy is mapped onto the bid to provide the issuer with a model of their project for tracking purposes.
By running a simulation on the model, we are able to project a distribution for the expected completion time of critical task milestones as well as the expected task completion time.
Figure 5: A burndown plot depicting the historical progress on a network of tasks (in red) as well as the results of a simulation predicting when the tasks will be completed.
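The simulation described above can be sketched as a simple Monte Carlo run: sample each task's duration from a distribution and take the critical-path length over many trials. The triangular distributions and the two-branch network are assumptions for illustration.

```python
# Hedged sketch of a completion-time simulation: sample each task's duration
# from a triangular distribution and take the critical-path length per trial.
# The task network and distributions are illustrative assumptions.
import random

tasks = {                  # name: (low, mode, high) duration in days
    "design":   (2, 4, 7),
    "backend":  (5, 8, 14),
    "frontend": (4, 6, 10),
}
# backend and frontend both follow design and run in parallel

def simulate(trials=10_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        d = {name: rng.triangular(lo, hi, mode)
             for name, (lo, mode, hi) in tasks.items()}
        samples.append(d["design"] + max(d["backend"], d["frontend"]))
    return sorted(samples)

samples = simulate()
p50 = samples[len(samples) // 2]          # median completion time
p90 = samples[int(len(samples) * 0.9)]    # 90th-percentile completion time
```

Reporting percentiles rather than a single number is what yields the distribution of milestone completion times shown in the burndown plot.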
The Moonlight platform provides a number of mechanics for issuers to use for task optimization to meet their needs. The use of these optimization methods will depend on the type of tasks being issued in the system.
The historical data can also provide issuers with expectations for their tasks prior to presenting them for bid in the marketplace. By referencing the skills and content of the task, Moonlight can recommend a value to assign to the task as well as the expected results. These estimates are highly dependent on proper task documentation and may not be available for all tasks.
Task Moderation Through Bounties
In Moonlight, issuers have the opportunity to stake a review bounty on tasks. If a bounty is staked on a task, other organizations are allowed to review and propose enhancements to the task (through enhanced documentation, clarification requests, value modifications, and required skill-set changes) in return for a portion of the bounty. Reviews can take a number of different forms including a direct review request from a specific organization.
Staking Tasks as Insurance
Occasionally, the method of payment for work completed on a task may be contingent on the completion of other tasks. An example is a task which includes an ICO where the payment is in the form of the issued coin. In this scenario, the task issuer may stake the project with another form of payment. If the project is successful, the payment in the issued tokens is received by the resolver. If the project fails to issue tokens within a time-frame defined by the staking process, the staked tokens are used for payment instead. This mechanism provides a level of insurance to resolvers and entices contributions on new ventures.
Figure 6: Issuers have the ability to stake tasks as a mechanism for providing insurance on tasks which have funding risks.
By supporting this functionality, the Moonlight system provides a mechanism for effectively crowdfunding projects and minimizing the risk to resolvers while also providing a high degree of project visibility.
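The settlement rule behind this staking mechanism reduces to a simple conditional, sketched below. All names, and the idea of tracking the launch as a day offset, are hypothetical illustrations of the described behavior.

```python
# Illustrative settlement logic for a staked task: pay in the project's
# issued token if it launches within the agreed window, otherwise release
# the staked collateral. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class StakedTask:
    stake_amount: float       # e.g. NEO held in escrow as insurance
    deadline_day: int         # staking window, in days from task start

def settle(task, token_issued_on):
    """Return which currency the resolver is paid in.

    token_issued_on is the day the token launched, or None if it never did.
    """
    if token_issued_on is not None and token_issued_on <= task.deadline_day:
        return "issued_token"     # success: pay in the newly issued token
    return "staked_token"         # failure or timeout: pay out the stake

task = StakedTask(stake_amount=500.0, deadline_day=180)
```

Either branch leaves the resolver compensated, which is what makes bidding on an unfunded venture tolerable.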
Multiple bids provide the platform with fidelity to optimize projects based on organizational preferences. For example, an organization may wish to minimize the expected duration at the expense of increased cost, reduced precision in the completion date, and quality of results. Because individual tasks are not locked until assigned to a resolver, the issuer is free to continuously optimize their tasks as external factors change.
Figure 7: Multiple bids provide the issuer with options regarding task staffing.
Figure 8: Multiple task bids allow the issuer to evaluate the impact to the task schedule in response to their selection of a resolver.
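One way to express such preference-driven bid selection is a weighted score over each bid's attributes. The weights, field names, and scoring function below are invented for illustration; an organization would tune them to its own priorities.

```python
# Sketch of choosing among multiple bids under organizational preferences:
# score each bid by a weighted mix of cost, duration, and the resolver's
# historical bid accuracy. Weights and fields are illustrative assumptions.

def best_bid(bids, w_cost=0.4, w_days=0.4, w_acc=0.2):
    def score(b):
        # lower cost and duration are better; accuracy near 1.0 is better
        return (w_cost * b["cost"] + w_days * b["days"]
                + w_acc * abs(1.0 - b["accuracy"]) * 100)
    return min(bids, key=score)

bids = [
    {"resolver": "alice", "cost": 900, "days": 10, "accuracy": 0.95},
    {"resolver": "bob",   "cost": 700, "days": 15, "accuracy": 1.30},
]
choice = best_bid(bids)   # bob's low cost outweighs his weaker accuracy here
```

Shifting weight toward accuracy would instead favor alice, which is exactly the kind of continuous re-optimization the paragraph above describes.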
A Resolver's Perspective
Resource allocation is an important metric for organizations to monitor. Allocating to too many tasks can result in schedule slip, which will impact the organization's estimation accuracy (as well as their reviews for those tasks). Because of the complexity of the scheduling and match-making systems, we provide tools that condense allocation monitoring for resolvers into an easily digestible format. This also surfaces functionality in the marketplace where resolvers are presented with tasks that meet their search criteria, along with how a bid on those tasks would impact their utilization within the system.
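The core of such an allocation monitor is a running sum of fractional commitments, sketched below with invented field names and a hypothetical 100% cap.

```python
# Hedged sketch of allocation monitoring: sum the fraction of time each
# active task consumes and flag over-allocation. Structure is illustrative.

def utilization(assignments):
    """Total fractional allocation across a resolver's active tasks."""
    return sum(a["allocation"] for a in assignments)

def can_bid(assignments, new_allocation, cap=1.0):
    """Would taking on this task keep the resolver at or under full allocation?"""
    return utilization(assignments) + new_allocation <= cap

active = [
    {"task": "api",  "allocation": 0.5},
    {"task": "docs", "allocation": 0.25},
]
```

Surfacing the result of `can_bid` next to each marketplace listing is one way the impact of a prospective bid could be made immediately visible.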
Task estimation accuracy improvement
Moonlight provides a feedback mechanism to improve resolvers' bidding estimation by allowing organizations to review their estimation accuracy across their task history as well as for specific skill-sets.
The Economics of Match-Making
Task Values and Bids
The Moonlight economy is built on the concept of task values and task bids. Fidelity in these two attributes can provide both an entry point for incoming resolvers as well as a competitive marketplace for seasoned ones. We define a few of these mechanics below:
By applying an elevated value to a task, issuers can entice additional bids from resolvers. These bids allow the issuer more options when optimizing their project. Additionally, a higher value may provoke bids from more experienced resolvers in addition to yielding more aggressive bid durations (implying a higher % of time allocation from the resolver).
By providing a low value on a task, an issuer may receive fewer bids, but can focus on minimizing the project cost. Assigning a low value to a task does not imply that experienced organizations won't place bids. An experienced resolver may bid on multiple low-value tasks and only spend a fraction of their available time on each. When doing this, they are incentivized to provide an accurate bid representing when the task will be completed, which would be longer than if they were fully allocated. This provides an opportunity for less experienced organizations to place more competitive bids on the task.
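The arithmetic behind this incentive is simple: at partial allocation, the honest calendar-time bid stretches in proportion to the fraction of time committed. The function below is a small illustration, not a platform API.

```python
# Small illustration of the point above: at partial allocation, the honest
# calendar-time bid stretches in proportion to the time committed.

def calendar_days(effort_days, allocation):
    """Elapsed days to finish `effort_days` of work at a given allocation."""
    return effort_days / allocation

full_time = calendar_days(10, 1.0)    # 10 calendar days
quarter   = calendar_days(10, 0.25)   # 40 calendar days
```

A seasoned resolver at 25% allocation must therefore bid roughly four times the fully allocated duration, leaving room for a less experienced but fully allocated resolver to win on schedule.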
Bid accuracy distributions and user reviews act as controls to regulate unreasonable bidding in the marketplace. Resolvers can very quickly destroy their reputation in the system by presenting unrealistic bids and being unable to deliver.
Note: Prior to task assignment to a resolver, issuers are free to manipulate task values at their discretion.
In Moonlight, users have the ability to view the task bids of other users in the ecosystem, which provides task competition. We borrow a term from the conventional project management lexicon, 'crashing', to describe the act of a resolver rebidding on a task with a more competitive estimate (which would imply greater resource allocation).
Example Use Cases
- A mid-sized company (1,000 FTE) has issues with resource silos, which puts business and employee growth needs into conflict despite a willingness to occasionally hire contract labor for support. Op-Ex is a continuous struggle, and there is constant corporate restructuring to meet changing business needs. The company maintains a substantial number of employees with mission-critical skill sets, and on-boarding is an expensive endeavor. The company chooses to use Moonlight primarily as an internal software platform for project tracking, which enhances the fluidity of project and resource matching. At the same time, contract workers are easily brought online if a skill desert arises, at a substantially lower cost due to the guarantee of their experience. Employees are also content because their skills are logged into the system to automatically build their resume (even though some skills may be encrypted by the employer to prevent the escape of sensitive content).
- A startup requires limited access to specific skills for a project and needs to understand the impact of each resolver on the project schedule. Additionally, their budget is extremely tight, so they cannot afford the financial burden of increasing headcount or working through conventional hiring platforms. They choose to use Moonlight to provide flexibility and seamless integration between the full-time staff and external resolvers for project tracking. Hiring overhead is also reduced because the experience of the resolvers is guaranteed by the system.
Project Bounty/Spontaneous Crowdsourcing
A group of individuals have defined a new blockchain project concept using a utility token that they believe will be successful in the market, but they do not have the skill-set required to actively manage or deliver the product. They form an organization on Moonlight and issue a task requiring program management and blockchain experience, with payment in the unminted system token. They choose to stake the project with an amount of Neo as insurance in case the project collapses. A program manager bids and wins the task, then uses the Moonlight platform to define the project architecture and tasks and to manage development to completion. The project is successfully launched within the staking guidelines, and the resolver is compensated in the utility token.
In a similar scenario, a group decides to crowdfund a project on the Moonlight platform, but defines task stage gates where compensation is distributed. Using this mechanic, the funding party can verify that the team is delivering as expected. Additionally, the tasks provide extended visibility into the project status.
Gig Economy Example
- Individual: Using a gig economy model, a user can freely issue and resolve tasks within the Moonlight ecosystem as a freelancer. This functionality is similar to existing platforms, with the exception of the trustless resume on the blockchain and improved task duration estimates.
- Moonlight (SaaS): A gig economy platform may interface with the Moonlight API to take advantage of the trustless resume and match-making functionality, but may leverage external features (like GPS tracking or extended identity verification) to meet the needs of their products.
Roadmap
Note: Schedule is tentative and subject to change.
- January 2018: Neo Developer Conference, San Francisco
- February 2018: White Paper Release
- Q2 2018: Token Sale