The scalability of systems engineering

With the resurgence of systems engineering in full swing, Guy Johns, Lead Technologist at CFMS, explores the promise of web technologies.

The emergence of new, complex products demands innovation and collaboration, as well as traceability and validation across an ever-expanding supply chain. This creates a number of challenges, and the vision for systems engineering is to close this gap through the use of modern IT architecture.

The discipline of systems engineering has been in existence for decades, and while its techniques and methodologies have certainly evolved over the years, it has failed to keep pace with the changes that have taken place within IT during that time. These changes centre on a significant shift in how systems are deployed, set up and, critically, how they scale.

Many of the vendors responsible for driving systems engineering forward five years ago remain focused on that same desktop architecture, signalling to the industry that they are hesitant to embrace the idea that compute and storage can be scaled up on demand – an idea pioneered by web technologies. Within that domain, cloud platforms such as Amazon Web Services, Google and Microsoft Azure have had a dramatic impact on the technologies used by companies like Netflix, while Oracle has influenced business trends by encouraging migration to its cloud infrastructure. Having taken note of this, there is real momentum gathering in the modelling and engineering industry, and it is simply impossible to ignore.

When we acknowledge the need for larger, more scalable and more collaborative teams to address complicated systems engineering tasks, we find ourselves turning to a potential solution such as web technologies. The sheer scalability afforded by the web seems to make it an obvious choice, but is it a choice that is truly resonating within the industry? One barrier is the skills base: many web development and design skills have grown up around media – specifically the streaming of content and services – rather than around engineering.

Another point to consider is the scalability of the methodology itself. In a previous whitepaper, entitled 'A through-lifecycle approach to system engineering', I explored the potential of systems engineering to capture data and use it to inform the next conceptual design, but there are hurdles to overcome when dealing with large teams. Cross-discipline design optimisation often presents problems: systems thinking holds that the whole is more than the sum of its parts, yet by structurally splitting up a system we lose the ability to conduct whole-system optimisations – and this is where we step into unknown territory.
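The cost of splitting a system structurally can be made concrete with a small sketch. The example below is purely illustrative – the functions and figures are hypothetical stand-ins for subsystem models – but it shows how two teams each optimising their own subsystem in isolation can miss the whole-system optimum whenever a coupling term links their design variables.

```python
# Hypothetical two-subsystem model: total mass = A + B + coupling.
# All functions and numbers are invented for illustration only.

def mass_a(x):
    return (x - 2) ** 2 + 10       # subsystem A, optimal alone at x = 2

def mass_b(y):
    return (y - 8) ** 2 + 5        # subsystem B, optimal alone at y = 8

def coupling(x, y):
    return 2 * (x - y) ** 2        # interface penalty when A and B diverge

def total(x, y):
    return mass_a(x) + mass_b(y) + coupling(x, y)

grid = [i / 10 for i in range(0, 101)]  # coarse search over 0.0 .. 10.0

# Split approach: each discipline optimises its own subsystem independently,
# ignoring the coupling term it cannot see.
x_split = min(grid, key=mass_a)
y_split = min(grid, key=mass_b)

# Whole-system approach: optimise the coupled total directly.
x_whole, y_whole = min(((x, y) for x in grid for y in grid),
                       key=lambda p: total(*p))

print("split design total:  ", total(x_split, y_split))   # 87.0
print("whole-system total:  ", total(x_whole, y_whole))   # ~29.4
```

The split design lands on x = 2, y = 8 and pays a large interface penalty; the whole-system search accepts slightly worse subsystems (x = 4.4, y = 5.6) and roughly a third of the total cost. That trade is exactly what is lost when a system is partitioned before optimisation.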

CFMS is taking traditional systems engineering tools and exploring what happens when a conceptual design systems model is rolled forward to a maintenance model, and a derivative product is then introduced. The question is whether we can reuse some of that previous work, or whether we are dealing with a new model – where do these whole-lifecycle approaches join up from a methods and tools perspective? Every new design undoubtedly includes lessons learned, but these often come from the personal experience of the design engineer rather than being carried forward by the methods and tools themselves. The exploitation of those skills and knowledge is a popular topic of conversation within the industry, and yet we are not hearing many solutions.

Web technologies are a way forward, but a further challenge is the security concerns that surround them. Although organisations are more at risk from someone walking into their premises with a USB stick than from someone hacking into their systems, the perception of threat remains. One way of addressing this is a private cloud, which is something we have set up at CFMS in order to gently migrate people into a web-scale environment while ensuring they are confident their data is secure.

One final consideration is that web technologies are quite disruptive in their licensing model, with the rise of the 'freemium' approach in which the majority of users pay nothing. Traditional vendors have often backed feature-based licences, but those do not tend to scale in business. For example, a costly single-seat licence with five add-on licences for specialist systems engineering features becomes prohibitively expensive when scaled up to hundreds of users. Not only is that a barrier to the adoption of large-scale systems engineering, it adds another piece to the puzzle, as web-based models are attractive precisely because of their ability to scale.
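The arithmetic behind that barrier is simple to sketch. All prices below are hypothetical placeholders, not real vendor figures; the point is only that feature-based seat costs multiply linearly with headcount while carrying a much larger per-user constant than a typical web subscription.

```python
# Hypothetical price comparison: feature-based desktop seats vs a
# flat per-user web subscription. Every figure here is invented.

SEAT_COST = 8_000           # assumed annual cost of one desktop seat
ADDON_COST = 2_000          # assumed cost per add-on feature licence
ADDONS_PER_SEAT = 5         # the 'five add-on licences' from the example above
WEB_COST_PER_USER = 600     # assumed flat per-user web subscription

def seat_model(users):
    # Each user needs a full seat plus all five feature add-ons.
    return users * (SEAT_COST + ADDONS_PER_SEAT * ADDON_COST)

def web_model(users):
    return users * WEB_COST_PER_USER

for users in (1, 10, 100):
    print(f"{users:>3} users: seats {seat_model(users):>9,}"
          f"   web {web_model(users):>9,}")
```

Under these assumed numbers, one fully featured seat costs 18,000 a year, so a 100-user deployment reaches 1.8 million before any collaboration benefit is realised – which is the sense in which feature-based licences "don't scale in business".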

All things considered, we believe a mix of licence models will persist while the industry continues to rely on powerful, client-rich desktop applications. Within the next five years, this will move to local data centres and private clouds. In the next 10 years, we expect to see the majority of businesses hosting in public or private clouds – possibly within a data centre owned by the company itself, or through a partnership with an organisation, such as Oracle, offering a dedicated service. If we look at the number of developers and their skillsets, it certainly seems as though computing is being pushed in this direction.

It is an exciting time, because we are seeing definite movement towards a web-based, integrated, concurrent-user system design approach. A relatively new specification called the Lifecycle Modelling Language (LML) aims to reduce design costs and enable more rapid product development by providing organisations with a structured and behavioural language. Visionaries within industry are looking at projects ranging from putting people on Mars to the development of autonomous vehicles, and to deliver these complicated systems within a reasonable timeframe we need to unlock the full potential of systems engineering.

In theory, if we can scale out across the business for any particular design point, then we should be able to move forwards or backwards throughout the product lifecycle and design, and achieve true through-life engineering. Some methods and tools are missing, but we believe they exist in other sectors; all we need to do is bring them together and scale them to the level offered by web technologies. All the ingredients are there – we just need to create the recipe.
