Autonomous Optimization of a Connected Asset Portfolio

As technology is increasingly deployed in the physical infrastructure around us to listen (sense), learn (perhaps more accurately, correlate conditions), and react (alarm), it is having profound impacts on business operations.

Gartner identifies three outcomes of operations in an Internet of Things environment:

Business processes will be more autonomous, not just automated. As such, multi-attribute advanced analytics, scenario modelling, and event detection will be the only practical ways to profitably manage ever more complex supply chains.  (There is a first-mover advantage for those who build superior decision speed into their supply chains.)

The evolution to business models that rely on monetizing data will blur the line between digital and physical value chains as well as between hardware, software and services.

Business “moments”, i.e., sudden, unplanned disruptions or opportunities, will require a further flattening of organizational silos to identify and exploit new opportunities.

This is not to assert that sophistication on the order of deep machine learning is required to manage these complex value chains. Instead, it is about using analytics to continuously identify and assess the operational or commercial actions available against a defined set of operator rules.  The distinction is that the analytics under discussion here assume the operating rules are set by the organization.  This differs from Deep Learning/Cognitive Computing, which assumes the machine is smart enough to change or create new rules without human intervention.
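The distinction can be made concrete with a minimal sketch: a system that continuously evaluates sensed conditions against operator-defined rules and surfaces the prescribed actions. The rule names, thresholds, and sensor readings below are invented for illustration; they are not drawn from any specific system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # tests a snapshot of sensed data
    action: str                          # operator-prescribed response

# Rules are set by the organization; the system applies them but does not
# learn, change, or create rules on its own (unlike a deep-learning approach).
RULES = [
    Rule("high_line_pressure",
         lambda s: s["pressure_psi"] > 900,
         "throttle inlet valve"),
    Rule("demand_spike",
         lambda s: s["demand_mmbtu"] > 1.2 * s["forecast_mmbtu"],
         "nominate additional supply"),
]

def assess(snapshot: dict) -> list[str]:
    """Return the operator-prescribed actions triggered by a sensed snapshot."""
    return [r.action for r in RULES if r.condition(snapshot)]

actions = assess({"pressure_psi": 925, "demand_mmbtu": 130, "forecast_mmbtu": 100})
```

Running continuously against streaming IoT data, such a loop identifies and assesses available actions; the human organization still owns the rulebook.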

There are several examples that point to the future state of analytics in energy value chains.  My favorite is the use of analytics in talent management, particularly in professional sports.  Nowhere is the use case more mature than in American baseball.  Begging the indulgence of non-North American readers: performance analytics in baseball uses the abundant data collected in a game, and over a long season, to evaluate the performance of a player.  This allows teams to look for the best combination of talent and value, as well as to craft their game strategy based on performance data on opposing players combined with extraneous factors such as location, game time, field dimensions, the particular inning and pitch count, etc.  The approach was first used by industry outsiders who mined data to understand correlations between performance and environment that led to better outcomes.  The book “Moneyball” (and the movie, for those interested in the rise of performance analytics in sports) chronicles the adoption of analytic metrics within the sport.

Performance analytics did not eliminate the need for human decision making (or reduce the cost of top-tier talent); it simply validated or challenged assumptions, or uncovered new insights with real performance value.  It did so with a rigorous analytic methodology backed by performance data, and proved itself with superior results on the field.  Not coincidentally, the biggest success story from the adoption of statistical analysis and performance analytics was the Boston Red Sox, who won the 2004 World Series after an 86-year championship drought.  Their owner, John Henry, the legendary commodities trader, was instrumental in adopting this approach to managing the team.

In my observation, the results of legacy commercial optimization programs in energy supply chains have been mixed at best.  They historically tend to work well in commodity value chains that have low optionality related to quality and distribution parameters.  Commercial optimization in utilities has been relatively successful for this reason (although renewables, increasingly part of the generation mix, have introduced more stochastic risk factors that need to be managed).  Scaling such an initiative to complex asset networks, like LNG, refining, or petrochemicals, presents more challenges.  More linked optionality in the value chain, fewer degrees of freedom to optimize, longer supply chains, and optionality related to quality and distribution tend to be the biggest modeling challenges.  Functional silos, conflicting incentives, and tribal knowledge cause process challenges.  Another observation from execution: the modeling approach to valuing optionality, and the choice of technology platforms used to aggregate data and perform the optimization, tend to pit Business and IT stakeholders against each other in a standoff.

As a result, short-term optimization still tends to be a very manual process, requiring the time and resources to diagnose a condition and reach consensus on the next set of corrective actions to be employed.  That is not to say optimization opportunities are never identified; incremental cost savings or revenue generation that improve system value do result.  It is just that, in a very high percentage of cases, the potential of the initial business case is rarely met.


Defining the Outcome

This discussion is not a post-mortem of lessons learned from such initiatives; rather, it steps back to define the objective function of commercial optimization along multiple dimensions.  This lets us objectively assess where (and to what extent) solving these objective functions with autonomous technology, leveraging the Internet of Things (IoT) as a source of operational data and advanced analytics and data visualization as tools, can drive greater returns than are currently achieved.  What this paper seeks to understand is whether the technologies under these umbrellas have the potential to overcome the constraints to success typically observed.


Working Hypothesis

The subject matter at hand is too large to be adequately covered in one post.  I plan to focus the discussion on specific energy value chains (natural gas and crude oil) and specific functional activities within these hydrocarbon value chains.  However, the extension of this approach to other commodity-exposed industries (food/agriculture, transportation) will be evident.

For purposes of defining the problem statement, I have used some abstraction to define the overall ecosystem in this first post, as we have to appreciate the inter-relationships between the gas and crude oil value chains without getting totally immersed in the intricacies of each.  As we define a specific industry use case for commercial optimization, this luxury will diminish, because the approach will require defining risk factors, constraints, and objectives in more detail.

To achieve the stated outcomes, one has to define a set of relationships and boundary conditions that serves as the framework for the optimization process under study.  First, define the “factors” that need to be considered when attempting to optimize the returns on an asset.  Factors are generally constraints, obligations, and extraneous influences.  Under these broad categorizations, one can classify sub-factors such as production volumes, contractual obligations, quality, location, time dependence, operational capacities, costs, transportation capacity, etc.  There are also factors with high stochasticity (often viewed generally as a cost of doing business); examples include price volatility, weather, sentiment, counterparty creditworthiness, demand, regulatory change, and competing sources of demand or supply for the commodity in consideration.
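The factor taxonomy above can be sketched as a simple data structure that separates deterministic factors (constraints and obligations) from the stochastic ones that must be managed as risk. The specific factor names and categories below are illustrative assumptions, not a definitive model.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str
    category: str          # "constraint", "obligation", or "extraneous"
    stochastic: bool = False

@dataclass
class AssetEcosystem:
    factors: list[Factor] = field(default_factory=list)

    def deterministic(self) -> list[Factor]:
        """Factors that can be modeled as hard boundary conditions."""
        return [f for f in self.factors if not f.stochastic]

    def stochastic_risks(self) -> list[Factor]:
        """Factors that require probabilistic treatment in the optimization."""
        return [f for f in self.factors if f.stochastic]

# Hypothetical factor set for a gas asset
eco = AssetEcosystem([
    Factor("production_volume", "constraint"),
    Factor("contract_delivery", "obligation"),
    Factor("pipeline_capacity", "constraint"),
    Factor("price_volatility", "extraneous", stochastic=True),
    Factor("weather", "extraneous", stochastic=True),
])
```

Partitioning the factor set this way makes explicit which inputs become constraints in the objective function and which become risk scenarios to evaluate against it.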

The next step is to define the dimensions that will drive the degree of optionality available to optimize.  For example: What is the feasible forward time window of optimization? 30 days or 30 minutes?  How does the convenience yield of the underlying commodity change as it progresses through the value chain?  And so on.  These dimensions will also drive the level of influence of highly stochastic factors on the optimization outcome.  But that is for a later discussion.
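The forward time window dimension can be illustrated with a toy allocation problem: distribute a fixed volume of a commodity across the periods visible in the window to maximize revenue. The prices and volumes are invented for illustration; the point is only that widening the window can expose optionality a narrow window cannot see.

```python
from itertools import product

def best_allocation(prices, total_volume, step=10):
    """Brute-force the revenue-maximizing split of total_volume across periods."""
    levels = range(0, total_volume + 1, step)
    best = (float("-inf"), None)
    for alloc in product(levels, repeat=len(prices)):
        if sum(alloc) != total_volume:       # must place the full volume
            continue
        revenue = sum(v * p for v, p in zip(alloc, prices))
        best = max(best, (revenue, alloc))
    return best

# Hypothetical forward price curve ($/MMBtu) and a 30-unit volume.
# The narrow window sees two periods; the wide window sees a third,
# higher-priced period and shifts the entire volume there.
narrow = best_allocation([3.10, 2.90], total_volume=30)
wide   = best_allocation([3.10, 2.90, 3.60], total_volume=30)
```

Brute force works only for toy cases; real value chains need linear or stochastic programming, but the role of the window as a bound on optionality is the same.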


Abstracting the Ecosystem

As mentioned earlier, the first step is to abstract the ecosystem to make sense of the competing dynamics in the natural gas and crude oil value chains.  This was an interesting exercise, as it was actually very hard not to try to model all the idiosyncrasies of each value chain.  Fortunately, the choice of how to represent this ecosystem worked very well in reaching a level of simplicity and clarity.  An infographic was used to show the inter-relationships between the energy commodities and provide a good grounding for the reader.  Working with a graphic designer to build a picture of these relationships required abstracting it; translating the concept to someone unfamiliar with it, using email and instant messenger as the communications medium, necessitated clarity in the description and rigid discipline in the use of content.  In practice, this took a couple of weeks to achieve.  The final product is not a vision of perfection, but it works as a reasonable place to start defining an optimization problem: what it is and where it sits, so we can frame a use case that speaks to the challenge.

[Image: Energy Value Chain]

With this view, one can zoom into one particular optimization function in a value chain and be cognizant of not only what factors within that system need to be addressed in optimization, but also be aware of what external dynamics can play a role in altering the supply or demand profile of the particular commodity.



Putting the pieces together

In summary, as alluded to above, the necessary and sufficient conditions are present for autonomous optimization to be tested in real-life situations.  The secular trends underlying technology and the workforce point to autonomous systems becoming the norm in the very near future.  The abundance of relevant use cases outside the energy industry provides a source of examples that can be studied and repurposed for use within the industry.  The definition of the problem statement should inform the mathematics and technology employed to solve it; then action must be taken to build prototypes to test in real-world scenarios.  Lessons learned from prototyping must be used to advance new possibilities.