At Propylon we work with clients across the legal and regulatory sector, from US state-level government to large multinationals. One of the interesting things I've found in these organizations is how similar their challenges are to those encountered by programmers and engineering teams.
In engineering teams, we invest heavily in tooling to enable us to effectively manage and review large code bases, with teams often distributed around the world. We implement continuous delivery and continuous integration patterns and pipelines to ensure we can quickly implement, build, and release software updates in the most efficient way possible while adhering to our own quality standards.
In the legal and regulatory sector, teams have the same need - to manage and release (publish) large content sets in an efficient way while ensuring quality and accuracy are maintained. These content sets are often large policy manuals, each of which implements a set of regulatory guidance or requirements. In the past these content sets would have been distributed as physical books, but more recently they have been digitized and distributed as PDFs or other digital formats.
Often these content management teams encounter publishing challenges because their process and tools are built to support a legacy model - one which is optimized for book publishing. Engineering teams have experienced similar challenges in the past when distributing software via physical mediums.
In content publishing, which in many organizations has evolved from book publishing, we see a large up-front effort in developing the content followed by a large publish event. Iteration is slow and expensive and issues are hard to fix after the fact. Because of this, significant effort is invested in ensuring content is heavily reviewed - double and triple checked for errors before finally hitting publish.
Compare this to software delivery. In the software world, especially in the context of the web, the processes and tools are designed to allow for on-going changes and updates. There is a low upfront effort, with many publish events. Iteration is constant and accounted for in the core processes. While there are still quality measures in place such as code review and testing, any issues identified after release can often be fixed in a matter of minutes.
In my experience working with these teams, many of the challenges they encounter can be solved by implementing the right tools and processes to enable a model of continuous delivery - that is, removing the large, infrequent publishing events and instead adopting a more frequent, rolling release pattern.
Some key considerations when adapting your tools to support a model closer to continuous delivery:
Build in flexibility
It is very rare to have a single process that meets the needs of the team all of the time. It's important that the tools you use to manage and publish your content allow flexibility in the workflows to account for the scenarios not captured by the core processes. No one wants to end up in a scenario where tooling is blocking a necessary action because of an oversight in workflow implementation.
Provide the right information at the right time
The tools used to manage, review, and publish content should surface the relevant information at the right time to enable users to get on with the job at hand. Every minute spent looking for information on what happened to a piece of content since the last review, or trying to find other cited or referenced content, is a minute the user is not doing what is required of them in that moment - be it a review, writing a report, or publishing.
Consistency and reliability
One of the most important things in enabling an efficient publishing and release process is building confidence in the tools used to facilitate it. When you don't trust the system to catch mistakes, manual processes are implemented as a safety net, and over time those processes become bottlenecks for releases.
The tools in question should record everything that happens to the content throughout the workflow. For any given piece of content or individual change, the system should provide answers to questions like:
- Who made this change?
- When was it made?
- What was the context?
- Who had reviewed it up until that point?
- Were there any comments added during the review stage?
It should be crystal clear how content has ended up in its current state.
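As a minimal sketch of what such a record might look like (the names here are hypothetical, not any particular product's API), an audit trail can be as simple as an append-only log of structured entries, each capturing enough detail to answer the questions above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    actor: str           # who made this change?
    timestamp: datetime  # when was it made?
    action: str          # e.g. "edit", "review", "comment", "approve"
    context: str         # what was the context? (workflow stage, task, ...)
    detail: str = ""     # review comments, change summary, etc.


class AuditTrail:
    """Append-only record of everything that happens to a piece of content."""

    def __init__(self):
        self._entries: list[AuditEntry] = []

    def record(self, actor: str, action: str, context: str, detail: str = ""):
        self._entries.append(
            AuditEntry(actor, datetime.now(timezone.utc), action, context, detail)
        )

    def reviewers(self) -> list[str]:
        """Who had reviewed the content up to this point?"""
        return [e.actor for e in self._entries if e.action == "review"]

    def comments(self) -> list[str]:
        """Comments added during review stages."""
        return [e.detail for e in self._entries if e.action == "comment" and e.detail]
```

Because entries are only ever appended, the log itself demonstrates how the content reached its current state.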
The audit trail adds a safety net - it demonstrates exactly how your content has come to be in its current form, who has had eyes on it, and who has approved it. It takes the assumptions out of the process and enables fact-based decision making.
It also enables rollback of changes. If a change turns out to be problematic or incomplete, it is clear exactly what needs to be reverted - inspect the audit trail and revert the relevant entries. Of course, reverting a change should itself be recorded!
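To illustrate the simplest case (hypothetical names again, and assuming each logged change stores before/after snapshots of the content), reverting means applying a change's "before" state as a new change - so the revert is itself recorded rather than rewriting history:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ChangeEntry:
    actor: str
    timestamp: datetime
    before: str  # content before the change
    after: str   # content after the change


def revert(log: list[ChangeEntry], entry_index: int, actor: str) -> str:
    """Revert one recorded change by re-applying its 'before' state.

    The revert is appended to the log as a new entry, so the audit
    trail also shows who reverted what, and when.
    """
    target = log[entry_index]
    current = log[-1].after
    log.append(
        ChangeEntry(actor, datetime.now(timezone.utc),
                    before=current, after=target.before)
    )
    return log[-1].after
```

This sketch only handles restoring a prior snapshot wholesale; selectively reverting one change from the middle of a sequence of edits needs a more sophisticated merge, but the principle - the revert lands in the same log - stays the same.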
One other benefit of a detailed audit trail is that it enables retrospective analysis of the entire process. It shows how long each stage of a workflow took - how much change arose from the review stages, how long passed between final review and final approval, and so on. These insights enable teams to adapt their processes to remove roadblocks and introduce efficiencies they may not otherwise have thought to implement.
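As a small example of that kind of analysis (the stage names and dates below are illustrative, not real data), timestamped workflow events pulled from an audit trail can be turned into per-stage durations with a few lines:

```python
from datetime import datetime, timedelta

# (stage, timestamp) pairs extracted from an audit trail -- illustrative data
events = [
    ("draft",          datetime(2024, 1, 1, 9, 0)),
    ("first review",   datetime(2024, 1, 3, 14, 0)),
    ("final review",   datetime(2024, 1, 8, 10, 0)),
    ("final approval", datetime(2024, 1, 9, 16, 30)),
    ("publish",        datetime(2024, 1, 10, 9, 0)),
]


def stage_durations(events: list[tuple[str, datetime]]) -> dict[str, timedelta]:
    """How long did each stage take, measured as time until the next event?"""
    return {
        stage: next_ts - ts
        for (stage, ts), (_, next_ts) in zip(events, events[1:])
    }


durations = stage_durations(events)
# durations["final review"] is the gap between final review and final approval
```

Aggregated across many pieces of content, figures like these make the bottlenecks visible instead of anecdotal.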
In summary, the tools used to manage and publish content play a huge part in how efficient the end-to-end process can be. Whether that content is source code or legal and regulatory material, adopting the right tools enables teams to move towards a continuous delivery model, the benefits of which include reduced turnaround time for changes, simplified workflows, and reduced costs.