Hidden Insights

Analyst as a mediator to get value from analytics

Architects are a key part of any major building project. They draw up the plans, and in some cases, may even be involved as project managers. But they may have another role that is not often discussed: acting as mediators in any disputes between builder and client. An architect is, in fact, ideally placed to mediate because they understand the language and the issues involved, but are seen as neutral by both sides.

Analysts can play a similar role in bridging the gap between IT and business, especially in setting up new analytical models.

Getting value from analytics

You can only realize value from analytics when you act on the insights you have gained. This can be as simple as using a report that provides important information, or as complex as deploying advanced analytical models in real time in operational systems such as a customer-facing web page, a cell in a network or a call center system.

Trends like IoT, big data analytics, AI and machine learning both allow and force us to use analytics in ways we could previously only dream about. But we can no longer deliver just every month, or even every day. We have to be able to act at the right moment, and in an IoT-enabled world, that can mean within milliseconds.

Modern tools allow us to automatically train multiple models for specific business problems across different segments. This can result in hundreds of models, all of which must be run to get value from them. Discussions of this in the field almost always combine strong interest with fear about how to make it work in practice, and the concerns tend to split along business–IT lines:

Business

Training hundreds of models, being able to use data I couldn’t access before, using modern machine learning techniques and doing this fast is a real game-changer. But to be able to get value from this, we need to put it into production and IT is usually slow and not very helpful. Getting this into production will take 6 months at least, and by then the models will be outdated. Each model running in production also adds to my workload. I will need to evaluate it regularly to see how it performs against actual outcomes, so that I know if it needs retraining, or replacing with a challenger model instead. I usually try to avoid that, because it might mean going through some of the IT processes again.

IT

New technology allows us to build scalable clusters. This means we can get the performance we need while still providing the redundancy required to achieve the agreed uptime and service levels for the business. Working with our business side, however, is really difficult. Everything is urgent, they never have clear requirements, quality control is lacking to say the least, and when we get to implementation the scope changes constantly, so it is impossible to plan.

Why is this so common? One reason might be that the two sides have different priorities and responsibilities. The main goal for the business side is to reduce time to market, improve sales, create better customer understanding, and develop new products. The IT side, however, has the responsibility for assuring quality, making sure that the promised systems perform properly in terms of timeliness and reliability, but also keeping the costs down as much as possible.

So how do we improve this?

Most of the time IT needs to deploy a new model goes into understanding the requirements, developing the model (managing data flows, model execution logic and integration), and testing and quality assurance. This gives them confidence that what is deployed will support the required uptime and timeliness. The deployment process usually includes requirements on development time, as well as ongoing monitoring of the model once it is in production.

Analysts, however, can simplify this process, by bringing the two sides together and managing the interface in several different ways.


Requirement specification

The result of developing an analytical model is usually an algorithm, or the parameters needed to apply that algorithm to new data. This is often called score code. The requirement specification is then a combination of the score code and the required input data. The modeller must already have the input data to train the model, and when new data sources are used, they usually prepare the data themselves before training. Since the modelling process therefore produces code both to create the variables and to score the model, that code can be used directly to simplify the requirement specification. The analyst can help both sides understand this and actively reduce the time required.
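
The idea that "score code plus required input data" is the whole handover can be made concrete with a minimal sketch. Everything here is hypothetical: the parameter values, variable names and the logistic form stand in for whatever the modelling process actually produced.

```python
import math

# Hypothetical output of a modelling process: the fitted parameters and
# the list of required input variables together form the "score code"
# that the requirement specification hands over to IT.
MODEL_PARAMS = {"intercept": -1.2, "income": 0.00003, "tenure_months": 0.04}
REQUIRED_INPUTS = ["income", "tenure_months"]

def score(record):
    """Score one customer record; returns the probability of the target event."""
    missing = [v for v in REQUIRED_INPUTS if v not in record]
    if missing:
        # The required inputs are part of the specification, so violations
        # are caught here rather than discovered during deployment.
        raise ValueError(f"missing required inputs: {missing}")
    linear = MODEL_PARAMS["intercept"] + sum(
        MODEL_PARAMS[v] * record[v] for v in REQUIRED_INPUTS
    )
    return 1.0 / (1.0 + math.exp(-linear))

p = score({"income": 50000, "tenure_months": 24})
```

Because the function and its input list are generated by the modelling work itself, IT does not have to reverse-engineer the requirement from prose.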

Adding an innovation lab for prototyping models also reduces the need for scope changes during implementation. It is much faster to prove or disprove the value of different models and data sources in a lab, not least because prototypes do not need to be integrated with established systems. The big question is usually agreeing a process so both sides know exactly what to do and include, minimizing the risk of misunderstanding, and here again the analyst can help bring the sides together.

Development

Data flows

If the model needs new input data, the data flows need to be updated. These can range from nightly batches and in-stream data integration to parameters passed through REST interfaces. When developing a model, it is common for the data to need additions or transformations. The modeller needs this new data to train the model, so the transformation work is usually part of their job. The time needed to build the production data flows can be greatly reduced if the modeller knows in advance that their data preparation code will form part of the requirement specification. The code may still need work if data sources change or if its performance must be optimized, but reuse is possible whenever the platform running the data flows can run the same code and access the same data sources as the modeller's platform.
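
A small sketch of what "reusing the data preparation code" means in practice: one hypothetical transformation function, written by the modeller, called unchanged from both a batch flow and an in-stream path. The field names and derived variables are invented for illustration.

```python
def prepare(record):
    """Data preparation written once by the modeller and reused verbatim
    in the production data flow (hypothetical derived variables)."""
    out = dict(record)
    out["income_per_month"] = record["income"] / 12.0
    out["is_long_tenure"] = 1 if record["tenure_months"] >= 24 else 0
    return out

# The same code path serves a nightly batch...
batch = [prepare(r) for r in [{"income": 60000, "tenure_months": 30}]]

# ...and a single in-stream record arriving in real time.
single = prepare({"income": 24000, "tenure_months": 6})
```

Because training and production share one function, there is no second implementation to drift out of sync with the model.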

Model execution

In many cases, this step involves taking the generated scoring parameters and rewriting the score code for a production system, often in another programming language. This is time-consuming and introduces additional sources of error. Unlike the data flows, the score code usually needs no optimization or rewriting. A production platform that can execute the modellers' score code directly therefore allows the model execution logic to be put into production without any development work, although an approval process is still likely to be needed.
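
The alternative to rewriting can be sketched as follows: the modeller hands over serialized parameters, and a generic runner executes the same scoring logic in production. The parameter values and the logistic form are assumptions for the sake of the example.

```python
import json
import math

# Hypothetical handover from the modeller: the parameters are serialized,
# while the scoring logic itself runs unchanged in production instead of
# being rewritten in another language.
handover = json.dumps({"intercept": -0.5, "weights": {"x1": 0.8, "x2": -0.3}})

def run_score_code(serialized_params, record):
    """Generic production runner: deserialize the parameters and apply
    the same scoring logic the modeller used during training."""
    params = json.loads(serialized_params)
    linear = params["intercept"] + sum(
        w * record[name] for name, w in params["weights"].items()
    )
    return 1.0 / (1.0 + math.exp(-linear))

p = run_score_code(handover, {"x1": 1.0, "x2": 2.0})
```

Since no code is translated by hand, the class of errors introduced by rewriting simply disappears; only the approval step remains.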

Integration

The actual integration with the system that will execute the model is often outside the scope of the deployment process. However, the time required for integration will still affect the overall deployment time if the model is to be used by a new system. Integration time can be reduced by exposing the data preparation and execution logic in multiple ways, such as REST/SOAP APIs, file integrations and message queues, so it can easily be connected to existing systems. The analyst or modeller can ensure that these access paths are built in as part of the development process.
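
One way to picture "multiple access paths over the same logic" is a single core scoring function wrapped by thin adapters, one per integration style. The threshold rule and field name below are invented; real REST and queue plumbing would sit behind the same shape.

```python
import json
from queue import Queue

def score(record):
    """Core execution logic shared by every integration path
    (hypothetical threshold model)."""
    return 1 if record.get("risk_points", 0) > 10 else 0

def rest_handler(body):
    """REST-style adapter: JSON request body in, JSON response out."""
    return json.dumps({"score": score(json.loads(body))})

def file_batch(lines):
    """File-integration adapter: one JSON record per line."""
    return [score(json.loads(line)) for line in lines]

def queue_consumer(q):
    """Message-queue adapter: drain the queue and score each message."""
    results = []
    while not q.empty():
        results.append(score(q.get()))
    return results

resp = rest_handler('{"risk_points": 12}')
batch_scores = file_batch(['{"risk_points": 3}', '{"risk_points": 15}'])
q = Queue()
q.put({"risk_points": 11})
q.put({})
queue_scores = queue_consumer(q)
```

Because the adapters contain no model logic, adding a new integration option later does not touch the deployed model at all.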

Testing

Everything needs to be tested, including each of the possible integration options. The more parts involved in the deployment, the greater the testing time and complexity. But many of the integration options share the same systems, data flows and analytical algorithms, so the actions that reduce development time will also reduce the time needed for testing.

Evaluation

Even though it is not part of the actual deployment process, it is important to include evaluation. The models will need to be monitored and perhaps even retrained. Reducing the time this requires, and therefore increasing the number of models that can run in production, demands automation. This may include automatic evaluation of model performance (how well predictions match actual outcomes), alerts for poor performance (or when a challenger model performs better) and, in some cases, automatic retraining.
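
A minimal sketch of such automated evaluation, assuming accuracy as the metric and an invented alert threshold; a real setup would use whichever metric and threshold the business agreed on.

```python
def accuracy(predictions, actuals):
    """Share of predictions that match the actual outcomes."""
    hits = sum(p == a for p, a in zip(predictions, actuals))
    return hits / len(actuals)

def evaluate(champion_preds, challenger_preds, actuals, alert_threshold=0.7):
    """Hypothetical automated evaluation step: flag poor champion
    performance and note when a challenger model outperforms it."""
    champ = accuracy(champion_preds, actuals)
    chall = accuracy(challenger_preds, actuals)
    return {
        "champion_accuracy": champ,
        "alert": champ < alert_threshold,          # trigger a review
        "promote_challenger": chall > champ,       # candidate for swap
    }

report = evaluate(
    champion_preds=[1, 0, 1, 1],
    challenger_preds=[1, 0, 0, 1],
    actuals=[1, 0, 0, 1],
)
```

Run on a schedule against actual outcomes, a report like this is what turns "hundreds of models in production" from a monitoring burden into a routine job.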
