LNG shipping - A case study for autonomous optimization

Following on from my last post, we will look at a use case for trading optimization in LNG to determine where autonomous optimization could be employed to streamline the process and provide rapid insight for decision making.  Before we dive into the use case, let me recap the hypothesis of the previous posts to set the context.

  • Advances in computational power and open access to data have elevated analytics to the primary competitive differentiator in Supply and Trading organizations. The differentiation comes from deploying analytics that give trading, scheduling, and operations rapid insight into the best available options for the next decision when a material change in market or operating conditions has caused a deviation from the current plan or schedule.

  • The availability of real-time data from asset operations within and external to the operator’s asset portfolio, economic data, sentiment, weather, and other stochastic data means that correlations and covariance between linked assets or linked contractual obligations can be modeled with greater precision and used to identify and correct micro-inefficiencies.

  • The velocity and granularity of data that can influence the value of embedded optionality in an asset or contractual obligation create a challenge. A human (or team of humans) will struggle to properly assess the spectrum of optimal decisions in real or close to real-time when optimizing a large network of assets and obligations.

  • This leads us to investigate where analytics can be introduced or enhanced to assist trading, scheduling, and operations in real time by focusing human decision time on the Pareto-optimal set of outcomes.  These decision options would be generated by automated analytics that continually assess the optimization problem in real time.

  • This discussion focuses on the Energy Value Chain, and the emphasis is on a methodology to define and deploy an analytics strategy: exposing the reader to how to define an optimization problem and discussing the design considerations that allow the analytics to scale to meet business and user needs.  In the context of energy value chains, this includes how to describe the linked optionality in the value chain in order to refine the objective functions and thus boost the ROI from investing in analytics.  

  • Technology is not the primary focus of this discussion.  Too often it has been the “tail that wags the dog” in the context of analytics strategy.  The technology toolset is an important consideration in the strategy, but it should be guided by the optimization solution.  The core technology platforms currently in use across energy supply and trading organizations do not adequately address today’s analytics challenges and are not architected to meet tomorrow’s challenges. If there is a space for massive disruption to the current order, it is in the technology platforms that serve these functions in energy.  

The primary goal of this post is to inform the reader on how to methodically assess where analytics should be applied or in the case of existing analytics functions, use this framework to benchmark for relevance and business value.  A comprehensive analytics strategy puts the onus on the Business and IT to jointly determine where and when to invest in enhancing existing or deploying new analytics tools.  


LNG Cargo Trading – Optimization Bullseye Approach

LNG serves as a good use case for this methodology for several reasons.  First, it is a relatively new market compared to its peers in energy, so there is a lot of content available online that one can use to model the data inputs to the optimization.  Most important is the availability of contractual templates that allow one to identify and model sources of optionality.  Second, it spans the gas and liquids markets, so it has the challenges of marine logistics along with some of the quality-related complexity common to refined products or crude oil.  Finally, it is not encumbered by legacy approaches and can serve as a clean benchmark for assessing a methodology.   

As mentioned in the previous post, the initial task in shaping an analytics strategy is to define the optimization factors that need to be considered.  Next, organize those factors (and sub-factors) into interconnected layers based on specific objective function(s).  I define an optimization factor as either of the following:

  • A variable data element (price, volume, etc.)

  • A calculation that represents a contractual obligation, a source of optionality (the market value of a contractual obligation or asset that varies with price, volatility, interest rates, and time), a penalty or reward mechanism driven by one or more pre-agreed KPIs, or an operational construct (e.g. production yields, loss factors, product recipes).  These calculations may use the data elements defined above or may be nested within multiple dependent calculations (see the sketch after this list).
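To make this definition concrete, here is a minimal Python sketch of one way an optimization factor could be represented, either as a raw data element or as a (possibly nested) calculation. The class and field names are illustrative assumptions, not a prescribed data model.

```python
# Illustrative sketch only: class and field names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Union


@dataclass
class DataElement:
    """A variable data element, e.g. a price or a volume."""
    name: str
    value: float
    unit: str = ""


@dataclass
class CalculatedFactor:
    """A calculation over data elements and/or other (nested) factors,
    e.g. a contractual obligation, a source of optionality, or a yield."""
    name: str
    inputs: List[Union["DataElement", "CalculatedFactor"]]
    formula: Callable[..., float]  # maps the resolved input values to a number

    def evaluate(self) -> float:
        resolved = [
            i.value if isinstance(i, DataElement) else i.evaluate()
            for i in self.inputs
        ]
        return self.formula(*resolved)


# Example: a toy "shipment revenue" factor built from two data elements.
des_price = DataElement("DES price", 11.5, "USD/MMBtu")
delivered_qty = DataElement("Delivered quantity", 3_400_000, "MMBtu")
revenue = CalculatedFactor("Shipment revenue", [des_price, delivered_qty],
                           lambda p, q: p * q)
print(revenue.evaluate())
```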

These factors are used to forecast expected return on an operational plan and are monitored to determine deviation from the forecast during plan execution.  I urge the reader to keep this definition in mind as we progress through the discussion. Consider the following classification methodology to achieve the described goals.  I term this "The Optimization Bullseye" approach:

[Figure: The Optimization Bullseye (bullseye1.jpeg)]

This classification schema has some natural advantages: it roughly mimics the hierarchy of a trading book, which, as traders will know, influences how results and KPIs are measured.  The approach also lets us overlay the business process onto the optimization factor ecosystem, which in turn lets us quickly identify a pecking order of where analytics should be applied (and where automation of analytics can deliver process efficiency right off the bat).  Finally, the layered approach lets us determine where factors from external optimization regimes, such as plant operations or logistics capacity, or from other analytics, such as regional supply and demand analysis or weather impact analysis, should be considered and, most importantly, which optimization factors they influence. 

Applying this methodology to an LNG optimization scenario involved modeling a transaction (a shipment in this case) loaded under a long-term SPA with a liquefaction facility and delivered to a customer under a long-term contract.  The optimization factors considered relate to the objective function of maximizing profit on the shipment.  From there, we worked methodically outwards, defining the influencing optimization factors embedded in the contracts that give rise to such shipments, and then in the portfolio of such contracts with multiple transactions either in motion or being planned and scheduled.
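As a rough illustration of that objective function, the sketch below computes the margin on a single cargo as delivered revenue minus purchase and shipping costs. The parameter names and numbers are invented for the example and leave out real-world terms such as tolling fees, hedges, or boil-off consumed as fuel.

```python
# Illustrative only: a toy FOB-to-DES shipment margin with made-up inputs.
def shipment_margin(fob_price, des_price, loaded_mmbtu,
                    boil_off_rate, voyage_days,
                    charter_rate_per_day, fuel_and_port_costs):
    """Profit on a single cargo: delivered revenue minus purchase and shipping costs."""
    delivered_mmbtu = loaded_mmbtu * (1 - boil_off_rate * voyage_days)
    revenue = des_price * delivered_mmbtu
    purchase_cost = fob_price * loaded_mmbtu
    shipping_cost = charter_rate_per_day * voyage_days + fuel_and_port_costs
    return revenue - purchase_cost - shipping_cost


# The optimization chooses destination, vessel, and routing to maximize this number.
print(shipment_margin(fob_price=9.0, des_price=11.5, loaded_mmbtu=3_500_000,
                      boil_off_rate=0.0012, voyage_days=20,
                      charter_rate_per_day=80_000, fuel_and_port_costs=1_200_000))
```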

The analysis also considers where external optimization activities that influence these factors must be accounted for in this ecosystem.  For example, natural gas feedstock quantity and quality management impacts the quality of the loaded LNG, and plant operations optimization influences production yields and thus the volume allocated to the shipper.  This captured the potential influences on the objective function of a single shipment.  Listing these external and internal factors resulted in the following:

[Figure: External and internal optimization factors for a single LNG shipment (bullseye2.jpeg)]



Overlaying this view of optimization factors across the appropriate business processes results in a mapping of the influence paths of those factors.   Market prices influence geographical and time spreads, which influence forecasted margins.  Cost curves affect sourcing decisions, which in turn influence modes of transportation, and so on.  Using a weighting methodology, the linkages between optimization factors were ranked by level of influence.  Drawing these linkages on the objective function layers shows how each factor influences the optimization across the different layers.  It also quickly highlights which (and how many) factors are critical to certain functional activities.   This insight allows us to ascertain where analytics should be applied and where automation of workflows and associated analytics could generate a superior benefits case.
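A minimal sketch of that weighting idea follows: influence links are stored as weighted edges, and factors are ranked by total inbound influence, which is where a "confluence of arrows" shows up numerically. The factor names and weights below are invented for illustration.

```python
# Illustrative sketch: factor names and weights are invented for this example.
from collections import defaultdict

# (influencing factor, influenced factor, weight of influence 0..1)
influence_links = [
    ("Market prices",           "Geographic/time spreads", 0.9),
    ("Geographic/time spreads", "Forecasted margin",       0.8),
    ("Cost curves",             "Sourcing decision",       0.7),
    ("Sourcing decision",       "Transport mode",          0.6),
    ("Weather",                 "Voyage schedule",         0.5),
    ("Port constraints",        "Voyage schedule",         0.6),
    ("Voyage schedule",         "Forecasted margin",       0.7),
    ("Vessel selection",        "Forecasted margin",       0.8),
]

# Rank factors by total weighted inbound influence.
inbound = defaultdict(float)
for src, dst, weight in influence_links:
    inbound[dst] += weight

for factor, score in sorted(inbound.items(), key=lambda kv: -kv[1]):
    print(f"{factor:25s} {score:.2f}")
```

Factors with the highest inbound scores (here, forecasted margin and the voyage schedule) are the natural first candidates for automated analytics.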

To see how this works in the use case, we followed the processes required to schedule a vessel to pick up and transport a cargo of LNG.  We added a requirement that the vessel be physically capable of discharging at multiple alternative ports, as the trader may opt to divert the cargo to an alternate destination to capture better margins, or be forced to divert because of operational issues at the intended destination.  The following influence map resulted from assessing this process.

[Figure: Influence map for scheduling a divertible LNG cargo (bullseye3.jpeg)]


The outcome helps us focus on where analytics can be applied to quickly enhance the decision-making capability of a particular function, given the disparate volumes of data and computation needed to properly assess the optimal decision set.  Graphically, this shows up as a confluence of arrows on a particular optimization factor.  In this use case, for example, vessel selection is critical to the organization's ability not only to optimize the particular transaction (a voyage in this case) but also to monetize the optionality in the contractual obligations as well as market dislocations.  Looking at the linkages influencing the LNG loaded for the voyage, we also see how operating constraints at the liquefaction facility, weather, and port constraints have a direct impact on costs and thus profit margin.  One can also quickly see how planning and scheduling several loading or discharging events simultaneously makes this a complex optimization problem.  Even if each event is treated as a linear workflow, optimizing over multiple simultaneous events can become too complex for one person or team to manage. 
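To give a sense of the combinatorial growth, the toy enumeration below counts the candidate plans for assigning just three vessels to three cargoes, each cargo with a choice of two discharge ports. The names are placeholders, and a real scheduler would add laycans, compatibility, and cost constraints before scoring each plan.

```python
# Illustrative sketch: counting candidate plans for a tiny scheduling problem.
from itertools import permutations, product

vessels = ["Vessel A", "Vessel B", "Vessel C"]
cargoes = ["Cargo 1", "Cargo 2", "Cargo 3"]
destinations = ["Port X", "Port Y"]

# Every vessel-to-cargo assignment, crossed with a destination choice per cargo.
candidate_plans = [
    (assignment, dests)
    for assignment in permutations(vessels, len(cargoes))
    for dests in product(destinations, repeat=len(cargoes))
]
print(len(candidate_plans))  # 3! * 2**3 = 48 plans for just three cargoes
```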

As those familiar with LNG SPAs will know, the time-dependent nature of a lifting schedule, combined with volume obligations, quality constraints, and changes to allocations, can make the inputs to vessel scheduling dynamic and leave the voyage prone to value leakage through no fault of the initial scheduling process.  An operational upset event increases the odds of value leakage further.  Situations like these typically trigger a spike in demand for scarce capacity to capture or mitigate the resulting dislocation, so reaction speed becomes a competitive differentiator.  An analytics strategy should deliver agility in reacting to changes in the inputs to optimization factors that result in a re-ranking of outcomes. 

An immediate use case for autonomous analytics that emerged from this exercise is creating a pecking order for vessels based on a combination of voyage needs, steaming performance (fuel usage, operating costs, etc.), historical voyage performance, port characteristics, financial performance (demurrage, penalties incurred), and service-provider performance.  This is a set of analytics that can run autonomously and continually refine the vessel pecking order between defined ports of call as operational data is updated.    A side benefit of building these analytics is that they accumulate a database of performance KPIs for marine transportation.
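A minimal sketch of such a ranking follows: each vessel carries a handful of normalized performance scores, and a weighted sum produces the pecking order, re-run whenever operational data updates. The attributes, scores, and weights are assumptions made for illustration.

```python
# Illustrative sketch: attributes, scores (0..1), and weights are assumed values.
VESSELS = [
    {"name": "Vessel A", "fuel_cost": 0.62, "on_time": 0.94, "demurrage": 0.10, "provider": 0.90},
    {"name": "Vessel B", "fuel_cost": 0.55, "on_time": 0.88, "demurrage": 0.25, "provider": 0.80},
    {"name": "Vessel C", "fuel_cost": 0.70, "on_time": 0.97, "demurrage": 0.05, "provider": 0.85},
]

# Weights per criterion; fuel cost and demurrage count against a vessel.
WEIGHTS = {"fuel_cost": -0.3, "on_time": 0.4, "demurrage": -0.2, "provider": 0.1}


def pecking_order(vessels, weights):
    """Rank vessels for a given port pair; rerun whenever operational data updates."""
    scored = [
        (sum(weights[k] * v[k] for k in weights), v["name"])
        for v in vessels
    ]
    return [name for _, name in sorted(scored, reverse=True)]


print(pecking_order(VESSELS, WEIGHTS))
```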

Another example is "milk run" voyages, where the load and discharge ports are relatively fixed.  A diversion/re-load algorithm can run continuously to determine, as prices change, where a better realized margin (net of diversion costs, quality degradation, and contractual penalties) is achievable.  This is available to the trader in real time, without any need for manual analysis, to determine when it makes sense to trigger a diversion notice.  Collecting this optimization data also enables more granular look-back analysis (again, some of which can be automated), allowing management to answer questions such as: When does paying up for a vessel to gain delivery optionality make sense versus lowering the cost to deliver a transaction?  Or, more simply, which vessels consistently deliver high operational efficiency?  Which jetties, if any, are preferable in Singapore because they are least susceptible to congestion?  Which charter party clauses are most likely to cause demurrage, by port and by season?  These analytic databases should also allow for robust simulation exercises (scenario analysis) combining volumetric, operational, and pricing conditions simultaneously.  As the reader will divine, the more analytics employed at the transaction level, the more insight and data are generated to drive the analytics employed at the contract and portfolio level.  
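As a sketch of that diversion check, the function below compares the margin at an alternate discharge port, net of diversion cost, quality degradation, and contractual penalties, against the base plan and flags the trader when the uplift clears a materiality threshold. All names and numbers are illustrative.

```python
# Illustrative sketch: a continuously re-run diversion check with made-up inputs.
def diversion_uplift(base_margin, alt_des_price, delivered_mmbtu,
                     alt_purchase_and_shipping_cost, diversion_cost,
                     quality_degradation_cost, contractual_penalty):
    """Extra realized margin from diverting to an alternate port, net of frictions."""
    alt_margin = (alt_des_price * delivered_mmbtu
                  - alt_purchase_and_shipping_cost
                  - diversion_cost
                  - quality_degradation_cost
                  - contractual_penalty)
    return alt_margin - base_margin


uplift = diversion_uplift(base_margin=4_000_000, alt_des_price=12.3,
                          delivered_mmbtu=3_400_000,
                          alt_purchase_and_shipping_cost=33_000_000,
                          diversion_cost=900_000,
                          quality_degradation_cost=150_000,
                          contractual_penalty=500_000)
if uplift > 250_000:  # assumed materiality threshold
    print(f"Consider a diversion notice: uplift of roughly ${uplift:,.0f}")
```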

Even in a relatively simple transaction cycle like an LNG cargo shipment, it is evident that there are multiple areas of optimization, some sharing the same data elements, others using data elements that are heavily influenced by, or are a direct output of, other optimization regimes. Using the bullseye methodology to define the problem and objective function(s) has led us to a good understanding of the linked nature of the value chain.  Harnessing value across multiple optimization agents requires innovative analytical approaches, which will be the topic of the next post.  

In conclusion, coming back to the point made at the beginning of this post, understanding the influence of transactions on operations and vice versa is where an analytics strategy starts to build competitive differentiation, both in the capability of people and in the technology enablers.   Current ETRM, ERP, or Manufacturing Execution Systems (MES) do not possess the capability to execute such analytics.  They are constrained by their architecture to view all activities through a transactional or operational lens and are better suited to optimizing within these dimensions than across them.  In complex hydrocarbon value chains, value is created by identifying or anticipating changes and reacting faster than the competition.   This is the promise of analytics.  The reader should suffer no illusions that building a competitive advantage in analytics is easy, or that, once built, the analytics can be "set and forget" to run unimpeded in perpetuity.   As with any capability, it requires intimate knowledge of the business problem, supporting processes that provide care and feeding, and KPIs that keep it true to its mission.

For all stakeholders, businesses and their IT organizations, technology vendors, and the like, executing such a strategy will require the fortitude to challenge established processes and architectures and a willingness to accept some failure along the journey and learn from it.  Too often, one of the subconscious biases that constrains the ability to break out of the current paradigm is the assumption that the future consists of the same people doing the same things, just a bit more efficiently.  A good strategy and accompanying methodology should tear down this bias, investigate new approaches, and extrapolate from them the new skill sets that will be required.  I challenge the reader to sit down and spend some time thinking about this.

In the following posts, we will explore a few related topics.  As mentioned above, we will look at mathematical approaches to harmonizing multiple planning and optimization agents, with an eye to where these have been deployed outside the energy industry.  We will also take a look at the human dimension of analytics that was touched upon in the first post, and how it is acting as a catalyst for this transformation.