Understanding The Factors Impacting Crude Oil Prices
In 2020, the combination of a huge run-up in crude supplies fueled by the Shale Revolution and severe COVID-induced crude oil demand destruction resulted in negative crude prices (WTI traded at close to -$38/bbl). At the time, the cliché of “nowhere to go but up” was precise and appropriate. It was only a matter of time before the price of crude rebounded and the U.S. and the rest of the world were producing enough oil to meet their needs. Crude prices returned to $40/bbl after production curtailments and an OPEC-plus production-cap agreement that went into effect in May 2020.
Global and U.S. refinery demand outpaced supply (production + imports – exports) by the third quarter of 2020. This allowed the drawdown of the massive inventories that had accumulated in tanks in Cushing, on the U.S. Gulf Coast, and worldwide, plus floating inventories stored in anchored VLCC tankers. Despite the COVID health and economic crisis, the fundamentals prevailed and prices rose as demand increased faster than supply, owing to slow production growth. This enabled the global market to draw down the stocks that had built up at a breakneck pace in the first and second quarters of 2020. How quickly the gap is filled as we approach normalcy depends on the propensity of Exploration & Production (E&P) companies to invest in drilling at a higher pace.
Concerning production, the recent increase in rig count has been driven by private E&Ps, not only to benefit from higher crude oil prices but also to make themselves more attractive takeover targets. In contrast, many publicly held E&Ps are facing ESG-related pressures from investors and have reined in their capital spending, returned more money to shareholders (via stock buybacks and dividends), and directed more of their cash flow into their bank accounts. This reluctance of many publicly owned E&Ps to invest more in drilling, even in this environment of higher crude prices, has been a significant factor in slowing U.S. production gains.
While crude oil prices over the long term reflect supply/demand fundamentals, there’s a lot more at play in today’s market: everything from the COVID outlook and indicators of economic recovery to the strategic moves of OPEC and other producers. Nor can we ignore production levels, cash accumulation by publicly owned E&Ps, Environmental, Social and Governance (ESG) factors, WTI futures and the forward curve, oil-market sentiment, producers’ capacity to ramp up, and momentum.
Where the pandemic and the economy are headed next remains to be seen. Our view is that we’re past the worst, and demand for refined products like gasoline, diesel, and jet fuel will continue growing in 2022 and 2023.
In our next article in this series, we will discuss the concerns and issues surrounding the environmental, social and governance (ESG) requirements for Oil & Gas Companies.
Perfecting the Shift Change
In any environment with a constant flow of work, a smooth transition from the outbound shift to the inbound shift could be the difference between undisrupted output and a disaster waiting to happen. In hospitals where errors directly impact the healthcare outcomes of patients, it is imperative that the changing of shifts is carried out with absolute efficiency. The best healthcare providers focus on the following three principles when implementing shift change protocols.
- Standardize transfer of information: There is no room for ambiguity or interpretation when it comes to communicating patient status. When the outbound nurse hands over the care of their patients to the inbound nurse, the patient’s health record should be in one consistent format, such as a template or checklist. This reduces variability in communication and ensures that the right information is passed on each time.
- Pre-stage medical equipment: To ensure that the incoming shift is set up for success, the outbound shift could take some time to inspect and stage medical equipment. Doing so ensures that the equipment is in working condition, in the right location, and in adequate supply. Once the incoming shift arrives, they should be able to take over the work without having to spend time on non-patient-care tasks. Think of a Formula 1 pit stop: if the pit crew only started looking for tires once the car was in the pit, they would have already fallen hopelessly behind.
- Active management: We all look forward to the end of a workday but in a hospital environment, patient conditions can unexpectedly change, and we must remain alert. It is common in any work environment to see productivity slow down towards the end of the shift. Are there tasks that can be performed to ensure that the incoming shift is set up for success? Are there patient care duties that would benefit from having an extra set of hands? Questions like these allow managers to reallocate excess labour and ensure that productivity does not drop as the shift comes to an end.
A smooth shift change is indicative of a highly coordinated healthcare facility. When it comes to patient care, there is no room for error. By perfecting the shift change, we can reduce variability, alleviate the stress of medical staff, maximize productivity, and provide a positive patient experience.
The author of this blog, George Xu, is a Senior Consultant at Trindent.
Addressing Performance Variability
Outdated Process Series
Previous articles in this series discussed the challenges of identifying outdated and inefficient processes. This article will concentrate on performance variability as one of the indicators of an inefficient process and will look at some ways to effectively address it.
When it comes to business, variability is generally not welcome. It prevents us from creating accurate forecasts, makes us keep extra inventory on hand, leads us to engage more resources than we need, and thwarts long-standing plans. In other words, variability causes inefficiency.
Many kinds of variability are unavoidable but can be managed with analytic and quantitative approaches, like the Erlang staffing model. There are, however, types of variability which can and should be controlled. Performance variability, for example, which can have a very significant impact on efficiency and profitability, squarely belongs in the controllable category.
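The Erlang staffing model mentioned above reduces to a short calculation. The sketch below is illustrative only (the function name and call-center framing are our own); it implements the classic Erlang C formula, which gives the probability that an arriving request has to wait, based on the offered load (in Erlangs) and the number of staffed agents:

```python
from math import factorial

def erlang_c_wait_probability(offered_load, agents):
    """Erlang C: probability that an arriving request must wait,
    given offered load A (in Erlangs) and N staffed agents."""
    a, n = offered_load, agents
    if a >= n:
        return 1.0  # unstable queue: demand exceeds capacity, every arrival waits
    numerator = a**n / factorial(n) * (n / (n - a))
    denominator = sum(a**k / factorial(k) for k in range(n)) + numerator
    return numerator / denominator
```

For example, an offered load of 2 Erlangs served by 3 agents leaves roughly 44% of arrivals waiting; adding agents drives that probability down quickly, which is exactly the staffing-versus-service trade-off the model is used to manage.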
What is Performance Variability?
Performance variability occurs when different levels of efficiency are observed among similar roles or processes under similar conditions. This could refer to employees of the same skill level performing very differently, or similar processes yielding significantly different results, or two or more identical pieces of equipment showing varied throughput and/or quality.
Identifying Performance Variability
The first step in identifying performance variability is recording the variance in performance through observation or analysis. Next, any and all external factors which could cause the variance in results should be accounted for.
Once external factors are ruled out, and it’s established that conditions are, in fact, similar, analysis should be redirected to the root causes of the variability. A closer and more detailed look needs to be taken at each individual, process, or piece of equipment in question.
Addressing Performance Variability
When dealing with equipment, performance variability is straightforward and can usually be corrected by repair or replacement, and then avoided in the future with a proper maintenance program. Performance variability involving people is far more complex.
Let’s look at some root causes of employee performance variability, along with some actions that managers can take to rectify the situation:
- Difference in skills: Sometimes it’s assumed that two or more employees have the same skill level, but no formal skills evaluation was ever put in place. By introducing tools like an employee skills matrix, it’s possible to verify skillsets and address any gaps with training.
- Unclear expectations: Often, managers don’t clearly communicate expectations to front-line employees. Providing managers with coaching and setting KPIs used in active management usually resolves the variability.
- Fear of losing service levels: This is prevalent in environments like call centers, where the drive to meet service levels comes at the expense of everything else. Coaching for managers and front-line staff, as well as smart KPIs can lead to a philosophy shift that resolves the variability by acknowledging that excellent customer service can be offered in an efficient manner.
- Opportunistic behavior: Sometimes the hurdles to employee performance are far more individual, like a personal problem or a lack of underlying motivation. Intervention by a good active manager can identify and rectify this cause of variability and help put performance back to desired levels.
- Lack of fit: Once all other options are exhausted, the only answer left is usually lack of fit. To resolve this, management can consider reassigning a particular resource to a different role, or redistributing responsibilities to other resources.
Identifying and addressing performance variability in people can be complex, and often requires an external and experienced perspective to effectively evaluate processes, systems, and behaviours.
Click here to find out more about Trindent’s approach and how we can help your organization address performance variability.
Working Capital Optimization: Safety Stock
When it comes to inventory management, companies often underestimate the importance of safety stock and the need to maintain optimal levels. While it may seem like the path of least resistance from a service-level standpoint, holding excessive inventory leads to suboptimal working capital management and, as a result, hurts profitability by curbing one of its key components: capital turnover, as Trindent Consulting has seen on medical-device inventory improvement projects. Although determining the optimal inventory level is challenging for several reasons, there are ways of optimizing it without disrupting the chosen service level.
Calculating Optimal Stock Levels
Depending on business realities and the type of inventory management (Fixed Order Quantity, or Q-model, versus Fixed Time Period, or P-model), the two main questions of inventory management are when and how much to order. The answer to both questions must include two components: covering demand between reorder periods, and safety stock to account for demand variability. If it were not for demand variability, the decision on the size and timing of orders, and therefore on inventory levels, would boil down to a simple exercise in Economic Order Quantity (EOQ), which balances ordering costs against inventory holding costs to arrive at the optimal frequency and size of orders based on stable demand. In this perfect scenario, we would boast 100% in-stock and fill rates with the minimum inventory determined by the EOQ calculation.
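The EOQ balance described above reduces to a one-line formula. The sketch below is illustrative (the function name and the figures in the comment are our own, not a client calculation):

```python
from math import sqrt

def eoq(annual_demand, cost_per_order, holding_cost_per_unit_per_year):
    """Economic Order Quantity: the order size that balances fixed
    ordering costs against annual per-unit holding costs,
    assuming stable demand."""
    return sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit_per_year)

# 10,000 units/year, $50 per order, $2/unit/year to hold
# -> order roughly 707 units at a time
```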
However, there is always demand variability and the desired service level to take into consideration. Service level is particularly important in the healthcare industry, where it’s not only desired but required.
To show the logic behind inventory management, let’s look at the fixed-time-period P-model. Assume we are dealing with a product whose expected demand is D_P over the period in question, P. The lead time for delivery from the supplier (or warehouse) is L. Let’s also assume that demand is random and follows a normal distribution (which will not always be the case) with mean µ and standard deviation σ. We can then calculate the stock quantity S that should be attained. As we will see below, the stock quantity contains enough product to cover demand for the period P+L, plus the safety stock, which depends on the service level we choose to pursue. The formula for calculating S is as follows:
S=D_(P+L)+z*σ_(P+L)-I
where:
D_(P+L): the demand over the period P plus the lead time L, calculated as D_(P+L)=µ*(P+L)
z: the number of standard deviations of the normal distribution corresponding to the chosen service level
σ_(P+L): the standard deviation of demand over the period P+L
I: the inventory on hand at the time of ordering.
Notably, z*σ_(P+L) is the safety stock portion and depends on the variability of the demand, represented by the standard deviation, and the chosen service level, represented by the number of standard deviations corresponding to it.
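Under the stated assumptions, the formula for S can be computed directly. The sketch below is illustrative, not a client tool; note that it derives σ_(P+L) as σ·√(P+L), which holds only if per-period demands are independent:

```python
from math import sqrt
from statistics import NormalDist

def order_up_to_level(mu, sigma, period, lead_time, service_level, on_hand):
    """Order quantity S = D_(P+L) + z * sigma_(P+L) - I.
    mu and sigma are the per-period demand mean and standard deviation;
    assumes independent, normally distributed per-period demand."""
    horizon = period + lead_time
    z = NormalDist().inv_cdf(service_level)   # safety factor for the service level
    expected_demand = mu * horizon            # D_(P+L) = mu * (P + L)
    safety_stock = z * sigma * sqrt(horizon)  # z * sigma_(P+L)
    return expected_demand + safety_stock - on_hand
```

For instance, with mean demand of 100 units per period (σ = 20), a 7-period review cycle, a 3-period lead time, a 95% service level, and 150 units on hand, the order quantity works out to roughly 954 units, of which about 104 units are safety stock.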
Choices and Challenges
While calculating safety stock using this framework seems like an easy exercise, there are several challenging choices that need to be made, including determining the properties of expected demand when past performance may not be a good indicator of the future. This may bring the mean and the standard deviation of the normal distribution into the spotlight, and sometimes even the distribution itself may not be normal. Another challenge is lead time variability. While the formula above assumes lead times are stable, there may need to be an additional level of analysis to arrive at a proper value for it.
The main challenge, however, is choosing the service level to meet. The most rigorous approach to determining service level follows the same logic as EOQ determination: counterbalancing the cost of overage (the holding cost, h) against the cost of underage (also known as the backorder cost, b), aiming to minimize their combined expected cost. Holding costs include the cost of capital, storage, obsolescence, and other costs related to holding inventory. Backorder costs include the profit lost on transactions that do not happen due to stock-outs. It is, however, very common to attribute additional meaning to backorder costs in an attempt to capture other negative effects of a stock-out, such as reputational cost and the effect on market share. Since we measure service level as the probability that a stock-out does not occur (Pr(D_(P+L)≤S)), the optimal service level can be expressed as follows:
Pr(D_(P+L)≤S)=b/(h+b)
As we can see, depending on the value attributed to backorder costs (b), stocking out may well be decided as prohibitive.
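The critical ratio, and the z value it implies under normal demand, can be sketched in a few lines (the function names are illustrative):

```python
from statistics import NormalDist

def optimal_service_level(holding_cost, backorder_cost):
    """Critical ratio b / (h + b): the optimal probability
    of covering demand without a stock-out."""
    return backorder_cost / (holding_cost + backorder_cost)

def safety_factor(service_level):
    """z: the standard-normal quantile for the chosen service level,
    used in the safety-stock term z * sigma_(P+L)."""
    return NormalDist().inv_cdf(service_level)
```

With a holding cost of $2 and a backorder cost of $18, for example, the optimal service level is 90%, corresponding to z of about 1.28 in the safety-stock formula; inflating b to reflect reputational damage pushes the ratio, and the required stock, higher.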
In addition to properly calculating necessary stock levels, there are several strategies for minimizing inventory without affecting critical service levels. For example, organizations often apply the same calculation to all the products in their roster, so the most important products may dictate the service level for the entire list. By simply stratifying products into A/B/C tiers and assigning an appropriate service level to each, considerable savings can be attained.
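A minimal A/B/C stratification might rank products by annual value and cut tiers at cumulative shares of, say, 80% and 95%; both the cut-offs and the function below are illustrative, not a prescription:

```python
def abc_stratify(annual_values, a_cut=0.8, b_cut=0.95):
    """Classify SKUs into A/B/C tiers by their cumulative share of
    total annual value, ranked from highest to lowest."""
    total = sum(annual_values.values())
    tiers, cumulative = {}, 0.0
    for sku, value in sorted(annual_values.items(), key=lambda kv: -kv[1]):
        cumulative += value / total
        tiers[sku] = "A" if cumulative <= a_cut else ("B" if cumulative <= b_cut else "C")
    return tiers
```

Each tier can then carry its own service level, reserving the highest (and most expensive) service levels for A items only.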
Click here to find out how Trindent Consulting can help your organization optimize your safety stock.
Six Sigma: What’s Needed to Succeed
Six Sigma, the popular methodology for process improvement, is a statistical concept that identifies the variation inherent in any process. By subsequently working to reduce these variations once it defines them, the Six Sigma methodology diminishes the opportunity for error, thus reducing process costs or increasing customer satisfaction.
The core objective of Six Sigma is to implement a measurement-based strategy that focuses on process improvement and variation reduction. At a high level, this is accomplished through the use of DMAIC, an improvement cycle (define, measure, analyze, improve, control) for existing processes that lack efficiency, and through the statistical representation of Six Sigma, which describes quantitatively how a process is performing.
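That statistical representation is usually expressed as a sigma level derived from the observed defect rate. A minimal sketch follows; note that the 1.5-sigma long-term shift built in below is an industry convention, not a statistical law:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, long_term_shift=1.5):
    """Process sigma level from observed defects per opportunity.
    Adds the conventional 1.5-sigma shift between short-term
    and long-term process performance."""
    defect_rate = defects / opportunities
    return NormalDist().inv_cdf(1 - defect_rate) + long_term_shift
```

At 3.4 defects per million opportunities, the calculation returns the familiar “six sigma” level; at roughly 66,800 defects per million, the process sits at about three sigma.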
There are, of course, many proven methodologies an organization can consider for process improvement. So, why should they use Six Sigma, and what do they need to make sure they succeed?
When to Use Six Sigma
Given the similarities between continuous improvement methodologies, it can be difficult to determine which one is right for a given situation. To help organizations make that decision, the Six Sigma Council outlines the following scenarios and the benefits Six Sigma can bring to each one.
When facing the unknown – A process is operating out of control but the problem causing the deficient output is not known.
Six Sigma looks for potential causes and prioritizes them using sigma-level calculations. It then sets up the framework to resolve those causes and arrive at a solution.
When problems are widespread and not defined – The problems in the process are known and understood, but the scope of the solution is not defined, leading to constant scope increases and lack of viable solutions due to their unmanageable size.
With control measures built into its methodology, Six Sigma steers clear of unmanageable scope escalations in favor of incremental improvements over time.
When solving complex problems – A problem with many variables causes a complex process, where it’s challenging to identify an approach, definition, and measure for a successful outcome.
Due to its statistical basis, Six Sigma can handle problems that contain large amounts of data and many variables, distilling them into hypotheses, premises, and conclusions on which to base changes.
When costs are closely tied to processes – A process that has a high cost risk due to its very small margin of error – where one incremental change can translate to millions of dollars of loss or gain – requires solution accuracy before implementation.
Six Sigma leans on statistical process control to test its assumptions, so when implemented properly, the method is significantly more accurate than its alternatives.
Success in Six Sigma
The Six Sigma method is not without its challenges, of course.
To be successful, Six Sigma requires support, primarily in the form of resources and data, at all levels of an organization. Adequately staffed engagement teams with the necessary subject matter expertise are a must for positive results, as is access to consistent and accurate data streams to enable calibration and the capture of the necessary KPIs, both of which are crucial to the value of the data outputs.
But ultimately, taking advantage of how customizable this approach is to fit your industry and organizational needs will be the key to successful process improvement.
Is It the Right Choice for You?
When starting on a process improvement initiative and considering the Six Sigma methodology, it is important to have all the information first. Knowing when this method is best applied can set you on the path to operational process perfection.
The Six Sigma methodology has been adopted by top operational excellence consulting firms, including Trindent Consulting. Click here to learn more about how we can work with you to utilize this valuable tool in driving the efficiency of your organization.
Don’t Forget Your Management Operating System
Outdated Process Series
Previous articles in this series discussed the challenges of identifying outdated and inefficient processes and what tools are needed to successfully implement business process improvements. This article will look at one other important component that must be taken into consideration when implementing change: your Management Operating System.
The Importance of Your Management Operating System
The Management Operating System (MOS) is a set of tools or structures that allow for measuring, controlling, and managing a process, an operation, or a company. The main purpose of MOS is to provide management with visibility that empowers their decision-making process.
MOS elements are usually divided into sub-categories dependent on the level of management and the outlook they support, ranging from business direction (the longest and most strategic outlook, which requires the most sophisticated MOS) down to execution-control elements, which capture real-time data about current activities.
Operating a business without a robust MOS is akin to walking around blindfolded: you may be able to get somewhere, but it will take you much longer to get there, and you may end up in the wrong place altogether. Add to that the possibility that your competition is not blindfolded at all, and the result is not hard to predict.
Typical MOS Pitfalls
While the accuracy and quality of MOS varies greatly between organizations, many fall victim to one of these common deficiencies:
- Poor design is the most common pitfall, resulting in system elements too few in number or too limited in functionality to provide sufficient or accurate insight;
- Gaps created when MOS elements necessary for a given level of management are either missing or insufficiently robust;
- System tools are not properly linked with one another and aren’t able to work together to paint an accurate picture; and
- Excessive or unnecessary elements are built in, and act as a distraction from key information, reducing the overall effectiveness of analysis.
The Gold Standard
While MOS elements will differ depending on the industry and on a particular company, the principles of its design remain the same. The best Management Operating System should give just enough of the right information to steer all aspects of the business, without clouding insight by generating inaccurate or unnecessary information. The data should be presented in a manner that promotes effective analysis and decision-making, and every element of the MOS should be used and useful in evaluating whether the company is on the right path to value creation.
Having an optimal MOS is a key part of eradicating outdated processes from your organization. However, the challenge of objectively evaluating your systems may get in the way. Click here to find out how Trindent Consulting can work with your organization to find and overhaul your MOS limitations.
The Path To Turnaround Time Improvement
We’ve all used John Ray’s adage, “Haste makes waste” to demonstrate that doing something too quickly causes mistakes and results in the waste of time, effort, and materials. And we’ve all experienced situations where we were asked to “work faster” only to subsequently face quality problems.
Similarly, many organizations that strive to eliminate waste go about it the wrong way, making mistakes at the expense of Turnaround Time. At Trindent Consulting, we specialize in helping your teams accelerate work while maintaining quality and reducing waste.
Be Little to Increase Speed
Turnaround Time is the amount of time it takes to complete a process or request. It is driven by Lead Time (the time between the initiation and completion of a process), which itself is made up of Non-Value-Added Time (time spent on steps that add nothing to the finished product) and Value-Added Time (time spent improving the outcomes of the process), and it has a direct impact on labor efficiency and cost.
Companies frequently address slow Turnaround Times by focusing on the average completion rate of tasks through process automation or digitization, but then miss much bigger opportunities by forgetting to use Little’s Law to address issues with Lead Time.
Little’s Law states that Lead Time equals the amount of Work-in-Process divided by the Average Task Completion Rate. The speed of any process is therefore inversely proportional to the amount of Work-in-Process, so identifying and eliminating unnecessary activity will inevitably improve Lead Time. Decreasing Work-in-Process increases speed and shortens Turnaround Time without needing to address the average completion rate.
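Little’s Law itself is one line of arithmetic; the illustrative sketch below shows how halving Work-in-Process halves Lead Time at a fixed completion rate (the figures are our own, not client data):

```python
def lead_time(work_in_process, avg_completion_rate):
    """Little's Law: average lead time = WIP / average completion rate."""
    return work_in_process / avg_completion_rate

# 120 open requests completed at 30 per day -> 4-day lead time;
# trimming WIP to 60 requests cuts it to 2 days at the same rate.
```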
Optimizing Turnaround Time: What’s Your Process Cycle Efficiency?
Often organizations don’t know if their Turnaround Time can be improved, or how to begin to tackle the issue. They don’t have the tools or knowledge to measure Process Cycle Efficiency (PCE), which shows what percentage of Turnaround Time is waste. Process Cycle Efficiency is calculated by dividing Value-Added Time by Total Lead Time. Decreasing Lead Time increases speed and decreases process Turnaround Time.
Measuring PCE allows companies to quantify the opportunity by process or workstream. A low PCE indicates opportunity to initiate improvement engagements.
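The PCE calculation can be sketched the same way (the figures are illustrative):

```python
def process_cycle_efficiency(value_added_time, total_lead_time):
    """PCE: the fraction of total lead time spent on value-added work.
    Both arguments must be in the same time unit."""
    return value_added_time / total_lead_time

# 2 hours of value-added work inside a 40-hour lead time -> PCE of 5%,
# meaning 95% of the elapsed time is waste to target.
```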
Conclusion
Legendary UCLA Basketball Coach John Wooden used to say, “Be quick but don’t hurry”, meaning do the activities that matter with speed but with accuracy.
Speed doesn’t have to hurt quality and can provide a competitive advantage in service cost if you decrease Turnaround Time by focusing on the biggest part of processes using Little’s Law and Process Cycle Efficiency.
The Building Blocks of A Robust Implementation Plan
The essence of developing a robust implementation plan lies in working hand in glove with the client at every stage. An implementation plan is a well-charted agenda for achieving the client’s objectives by breaking the overall engagement down into bite-sized pieces, with each component having an owner and a firmly tracked deadline. This makes it easy for clients to follow along and gives a clear indication of who is driving the agenda and when we expect to reach closure. The objective of the implementation plan is fulfilled only by continuously monitoring progress towards the end goal while confirming that the client has been well trained on the new process guidelines. To design the plan successfully, investing time in understanding a customer’s existing processes and systems becomes a crucial step during an engagement and helps us determine where the greatest opportunity lies.
Strategies to Identify Implementation Opportunities
The first step is conducting area interviews with multiple key stakeholders to gather all the relevant details including the function of the area, organization structure, information they receive and provide, and to understand any inefficiencies in their existing setup. We also utilize this time to engage the client in having the first round of discussion on the opportunities identified.
For a recent Oil & Gas engagement, the team identified a plethora of opportunities across several areas (Planning, Scheduling, Lab, and Operations) that could improve the refinery’s blending process. The team dug deep to understand the functioning of the stakeholder teams and their interdependencies, which highlighted potential areas for improvement. We attributed a financial benefit (savings) to optimizing the refinery’s blending practice by following Trindent’s methodology and recommendations, which stem from our deep expertise in this space, built through successful implementations at top refineries across North America. We began by filtering, identifying, and ranking the opportunities with the greatest impact first, then worked our way down the list, keeping our clients well apprised of what we were working on and ensuring that we were implementing a sustainable change.
Components of an Implementation Schedule
Once we had a final rundown of the opportunities signed off by the team, I started developing a detailed implementation schedule. This included breaking down each opportunity into different tasks and subtasks. Individual tasks can have several stages, such as designed, implemented, installed, and sustained, and a timeline to enable progress tracking. At this point it’s imperative to check that the team has adequate resources (both time and personnel) for the successful implementation of the plan. This can be achieved by doing the following:
- Identify the personnel: Assign each task to a stakeholder, and keep in mind that they can make or break the project. It is therefore important to choose them wisely. A good owner is someone with a vested interest in the success of the project.
- Check personnel’s availability: Ask your key stakeholders for their upcoming schedules well in advance. It’s also important to set clear expectations at this point, stating plainly how much involvement is required, whether it will be 5 hours a week or 10. This is because while the project may be the priority for you, it may not be for your stakeholders.
“Fail to plan and you plan to fail.” – Benjamin Franklin
Trindent believes in developing custom implementation solutions for our clients; there is no cookie-cutter, one-size-fits-all solution. We achieve this by working very closely with clients to understand their requirements. We identify and address their biggest pain points and recommend sustainable action plans, aiming to ensure that the client’s learning curve in adapting to the process changes is as gentle as possible and achievable at the lowest cost. By doing so, we bring operational efficiencies to our clients’ daily work practices, helping them improve their margins and achieve savings along the way.
Trindent Consulting President Named Finalist for EY Entrepreneur Of The Year
We're thrilled to share the exciting news that our very own President, Adrian Travis, has been recognized as a finalist in the prestigious EY Entrepreneur Of The Year® 2021 program for Ontario. This esteemed accolade celebrates visionary entrepreneurs who are reshaping industries with innovation and growth. Adrian's remarkable leadership has propelled Trindent Consulting to the forefront of the management consulting landscape, earning us a spot among the top performers in North America.
EY Entrepreneur Of The Year is the world’s most prestigious business awards program for leading global entrepreneurs. Each year, EY recognizes business leaders across the country who are transforming our world through unbounded innovation, growth, and prosperity.
"I am deeply honoured and humbled to have been named a finalist of this globally recognized award," says Adrian. "I founded Trindent with a vision to empower companies to be able to achieve top performance without having to rely on expensive software or capital investment. I am grateful for all who believed in us along the way, especially our dedicated and hard-working employees, and our clients who trusted us with their most complex business problems."
Since founding Trindent in 2008, Adrian has championed a vision of excellence, guiding our firm to deliver unparalleled results for our clients. Under his direction, Trindent has become renowned for its specialized services, offering tailored solutions in sectors such as energy, healthcare, and financial services. Adrian's commitment to excellence has enabled us to consistently exceed expectations and drive transformative change for our clients.
"Our success has always been built on our firm's values - perfection with urgency, character before skill, and a passion for solving complex problems. More than ever, clients are looking for targeted, sustainable financial improvements, and Trindent's track record of delivering a 500% or higher return on investment has really resonated in our three focus industries," says Adrian. "Being recognized and named as a finalist for the prestigious Entrepreneur Of The Year award is a testament to the rapid growth trajectory we have maintained since 2008. I am looking forward to what the future holds for Trindent Consulting."
About EY Entrepreneur Of The Year®
EY Entrepreneur Of The Year® is the world's most prestigious business awards program for unstoppable entrepreneurs. These visionary leaders deliver innovation, growth, and prosperity that transform our world. The program engages entrepreneurs with insights and experiences that foster growth. It connects them with their peers to strengthen entrepreneurship around the world. EY Entrepreneur Of The Year is the first and only truly global awards program of its kind. It celebrates entrepreneurs through regional and national awards programs in more than 145 cities in over 60 countries. Winners go on to compete for the EY World Entrepreneur Of The Year title.
About Trindent Consulting
Trindent is a global management consulting firm specializing in solving complex business problems and achieving top performance in the Energy, Healthcare, and Financial Services industries. Since 2008, our unique approach to generating bottom-line improvements has yielded ROI of 500-1,500% in the first year for more than 100 clients across the globe and our results give Trindent the reputation of a firm that Makes It Happen™.
Trindent has been ranked as one of Canada's Fastest-Growing Companies by Canadian Business and the PROFIT/Growth List for seven consecutive years, from 2014 to 2020. Trindent was also named one of the Fastest-Growing Consulting Firms by Consulting Magazine from 2015 to 2020.
Source: Newswire
Oil and Gas Industry: The Path Forward
As 2021 unfolds, the Oil & Gas Industry and the global energy landscape are experiencing significant shifts. After an extremely challenging 2020—in which the sector suffered major disruption from a simultaneous price collapse, supply glut, unprecedented demand decline, and a health and economic crisis—the outlook remains uncertain.
Aside from industry fundamentals, other forces remain at play. Amid shifts in major trends such as emerging technologies, pressure to act on climate change, new regulations, changing consumer preferences, and investor activism, most experts agree that a permanent shift in the energy demand curve has taken place. Concerning fossil liquid fuels in particular, there seems to be a consensus that demand will rebound to previous levels and grow, albeit at a slow pace, over the next decade, and then decline gradually rather than suddenly, depending on the regulatory pressure on emissions. However, long-term oil demand will remain pressured by several factors, such as slower growth in automobile demand, improved engine efficiency in road transportation, and the appetite for electrification. As such, oil remains exposed to large swings in all scenarios developed by forecasters. Natural gas is the lone fossil fuel whose demand is expected to grow significantly in the coming decades. Regardless of the outlook and the trends for the energy transition, all scenarios share a common theme: fossil fuels will retain their fundamental role in the energy sector for the next 30 to 40 years.
Recovering From 2020
Worldwide, the Oil & Gas sector experienced great financial turmoil last year during the pandemic. Hundreds of thousands of workers were laid off, and refinery sites in the U.S. and abroad were permanently shut down. As demand recovers to previous levels, fewer refineries are available to produce the fuels required, and, upstream, about 40 million barrels per day of new production are needed despite severely restricted capital expenditures.
Thus, while the lion's share of the attention is focused on electricity, “gray” vs. “green” hydrogen, and renewable energy, the 131 remaining refineries in the U.S. still need to refine the petroleum products that power more than 90% of all transportation; the plastics utilized in PPE manufacturing; the isobutane, propane, and propylene used as refrigerants for vaccines and medicines; and the naphtha used as a raw material for polypropylene syringes. In addition, about 50-60% of U.S. homes are heated by natural gas; most of the remainder use electricity, though natural gas is responsible for nearly 40% of electricity generation as well. In short, fossil hydrocarbons are practically everywhere. Given the sector's role in supplying affordable energy, it is too important to fail.
The Path Forward
What, then, is the path forward for market participants in the Oil & Gas space? Refining and Marketing independents and major oil companies may accelerate the pace of their investments in renewable energy, pushed by increased shareholder pressure to go green, driven by the Environmental, Social, and Governance (ESG) trend and by government policies that reward renewables and penalize Oil & Gas. However, despite highly heralded green-energy goals, Oil & Gas companies already run a business that will be critical for the next few decades. As such, shareholders and owners need to focus on creating value amid these new conditions, as the industry enters an era of intense competition and rapid, technology-driven supply response.
In the case of refiners, there is a heightened incentive to redirect efforts after last year's turmoil to ensure short- and long-term success. All companies predictably acted to protect employees' health and safety and to preserve cash, for example by cutting or deferring discretionary capital and operating expenditures and, in many cases, distributions to shareholders. Predictably, these actions were not enough for companies that were already financially hard-pressed. Refiners will continue to experience wild swings in inventories, demand uncertainty, and shifts in the political climate, all of which affect the stability of the sector. To adapt to this fluctuating environment, leading companies maintain their focus on continuous-improvement projects with high ROI and short payback periods.
The pressure is ever mounting to extract all possible value from optimizing refineries and their supply chains and to seize new opportunities for margin improvement through digitizing refineries. At Trindent, we double down on the recommendation we made at the beginning of the year: “Refineries should be focused on creating value through precision, limiting product exceedances of minimum-quality requirements by improving product-demand forecasting, blending processes, or using in-line measurement tools. Refiners with optimized operations planning spend less on the components that make up their finished products.”
To operationalize our recommendation, here are some of the areas we have focused on with our clients:
- Identifying the value and finding the “cash register.”
- Setting up the right processes and workflows, including identifying and redeploying the team’s roles and responsibilities.
- Integrating existing software and hardware solutions (e.g., process design, supply chain planning, refinery scheduling, advanced process control, data historians and predictive asset performance management) with real-life data and operations to ensure that our clients extract the full value of the tools they have invested in.
- Making sure that advanced technologies such as in-line analyzers have proper modeling plans, as well as calibration procedures, to ensure property-analysis precision and profitability when utilized in conjunction with Distributed Control System (DCS) Advanced Process Control strategies for blending refinery fuels.
- Making sure appropriate diagnostic monitoring tools are in place to ensure quality control of in-line analyzers.
- Ensuring a robust Quality Assurance program in data measurement activities, as well as promoting communication between the data testing facilities and operations for validation programs and control & optimization processes.
- Ensuring proper use of statistical tools and models to provide appropriate input to Advanced Control tools, so that the tools are tuned and used with unclamped limits for optimization.
- Ensuring proper maintenance (preventive and condition-based) for analyzers and field instrumentation that are part of the refinery digitizing efforts.
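The blending-precision recommendation above can be made concrete with a toy example. The sketch below uses `scipy.optimize.linprog` to find the cheapest blend recipe that just meets a minimum octane specification; the component names, costs, and property values are all hypothetical, and octane and vapor-pressure blending are approximated as volumetrically linear, which is a first-order simplification of real blending behavior.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical blend components: cost ($/bbl), research octane (RON), RVP (psi).
# Properties are assumed to blend linearly by volume fraction (a simplification).
names = ["reformate", "alkylate", "fcc_naphtha", "butane"]
cost = np.array([90.0, 95.0, 75.0, 45.0])   # $/bbl
ron  = np.array([97.0, 95.0, 90.0, 93.0])
rvp  = np.array([ 4.0,  5.0,  7.0, 60.0])   # psi

RON_SPEC = 91.0   # minimum octane for the finished grade
RVP_SPEC = 9.0    # maximum vapor pressure for the finished grade

# Minimize blend cost subject to:
#   blend RON >= RON_SPEC   ->  -ron . x <= -RON_SPEC
#   blend RVP <= RVP_SPEC   ->   rvp . x <=  RVP_SPEC
#   volume fractions sum to 1, each fraction in [0, 1]
res = linprog(
    c=cost,
    A_ub=np.vstack([-ron, rvp]),
    b_ub=np.array([-RON_SPEC, RVP_SPEC]),
    A_eq=np.ones((1, 4)),
    b_eq=np.array([1.0]),
    bounds=[(0.0, 1.0)] * 4,
    method="highs",
)

blend_ron = ron @ res.x
print(f"cost: ${cost @ res.x:.2f}/bbl, RON: {blend_ron:.2f}, "
      f"giveaway: {blend_ron - RON_SPEC:.2f} octane numbers")
```

A real blend-optimization problem involves nonlinear blending indices, many simultaneous specifications, and multi-tank, multi-period scheduling, but the cost of quality giveaway shows up the same way: every octane number delivered above the specification is premium component given away without compensation.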
Conclusion
In the longer term, significant shifts in the sector’s fundamentals will continue, and government policies will drive the magnitude of the transitions. Meanwhile, success today and in the coming decades is contingent upon developing efficiencies and rapid responses to ever-changing conditions. Trindent is uniquely suited to helping our clients achieve those goals and reposition for the future.