Oil & Gas: The Move From Documents to Data
[ This is a guest post from Neale Stidolph, Head of Information Management, Lockheed Martin UK. Neale will be speaking at The AIIM Conference in San Diego on Information Management in the Oil & Gas Industry. ]
People often assume oil & gas companies are leading-edge, have enough money to invest in whatever systems they like, and live in a world of 3D models, data and analytics. The real picture is often very different. Exploration and production companies, which search for and extract hydrocarbons, see information systems and information management as necessary but not as something that excites the interest of the board. Data is certainly much in evidence, but information overall is not treated as ‘the new oil’ by the industry. Much of the focus on data sits within the geoscience discipline, from the creation of seismic surveys to reservoir modelling and interpretation. The techniques have changed somewhat, but mostly we see increases in resolution, frequency and speed of analysis. That clearly improves the odds of making a discovery and reduces the financial risks of drilling.
Engineering is the domain where there are pockets of data and certainly plenty of systems and methods that could help, but it isn’t working well for many businesses. Most oil companies do not achieve data-centric engineering and do not practice engineering lifecycle management, though they may believe their engineering contractor does this for them. Records, drawings, specifications, datasheets and other documents are variously controlled, uncontrolled, lost, out of date, duplicated, rendered and generally not in an acceptable condition or one that can be used to advantage. The data is there, data which could make engineering projects faster, less risky and cheaper. But it is not readily available in the right form, cannot always be trusted and spans incompatible systems, often involving several firms in the oil supply chain, with inconsistent or missing metadata.
Value is being eroded or destroyed and opportunities are being missed. In most other sectors that would be game-over, and tragically in some cases we see fatal consequences. Why does it persist in oil? Because the industry has been profitable enough to be inefficient and just works around the problems.
So, what’s the issue? Nothing stays the same, and what worked in one era may not work in another. The current oil & gas business environment is very challenging. It is tough enough finding and exploiting reserves, be it oil sands, fracking, high-pressure / high-temperature reservoirs or deep water, often in unstable geopolitical settings. Add to that the problems of huge swings in oil price, fast-rising costs and falling production volumes in mature provinces, and you have a perfect storm.
The sector is challenging, with huge projects and large volumes of legacy information changing hands over the life of assets. The digital age adds rising information chaos, in both the scale of growth and the pace of change. A documents-and-records approach is only partially working and does not support easy use of the underlying data. Data is the key to analytics and better decision making, and this will be the future as resources diminish, risks rise and returns fall.
1. Tackling legacy information
Oil feels like a modern industry, but it is one that has existed through a time of great technological change. We have gone from drawings made with pen and paper, to primitive CAD systems, to smarter systems with 3D capability and engineering data warehouses. Many firms hold information spanning these technical generations: archives of paper, microfiche, scanned image files and a range of electronic files or tape media, some of which were produced by systems that no longer exist and so cannot easily be opened or converted. Value still exists, but you have to know where to look and how to extract it. Legacy projects can take many years and be very labor intensive, and that will not suit the board.
If you are in a firm that acquires an oil field from another firm, you should expect a very large and diverse range of information, with little guidance or structure to accompany it. That presents a major risk. This first phase is about discovery: what do you have, in what forms, and where are the areas of greatest value?
2. Mining data from documents
Once there is an appreciation of which documents or drawings to target, it is time to get tactical and deploy appropriate techniques to make them more useful. Common safety-critical documents include piping and instrumentation diagrams (P&IDs), isometrics, and line lists. If you are working with a scan you can use OCR, but it isn’t easy to do well; you may need to have the drawing re-drawn, or at least manually checked by someone with appropriate engineering or document control skills. That takes time and money.
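To illustrate the kind of cleanup OCR output needs, here is a sketch in Python that pulls candidate equipment tags out of recognised text and corrects the character confusions OCR is prone to. The tag convention shown (e.g. "P-101A") is an assumption for the example; real conventions vary by operator and project:

```python
import re

# Illustrative tag convention: letter prefix, dash, number, optional suffix,
# e.g. "P-101A" for a pump. This is an assumption, not a standard.
CANDIDATE = re.compile(r"\b([A-Z]{1,3})-([0-9OIl]{3,4})([A-Z]?)\b")

def extract_tags(ocr_text):
    """Return normalised equipment tags found in OCR output.
    OCR often reads 0 as O and 1 as I or l, so the numeric part
    is corrected before the tag is recorded."""
    tags = set()
    for prefix, number, suffix in CANDIDATE.findall(ocr_text):
        digits = number.translate(str.maketrans("OIl", "011"))
        tags.add(f"{prefix}-{digits}{suffix}")
    return sorted(tags)
```

Even a crude pass like this gives a document controller a candidate list to verify, rather than a blank page, which is where the manual checking effort is best spent.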
You will also need to validate whether drawing version control is correct, as drawings are often marked up for changes but never re-mastered or ‘as-built’, sitting in backlogs that can last years. Look too at current processes to make sure appropriate data capture exists for new drawings; otherwise your legacy stack will only grow. What we are looking to do is relate engineering objects, such as a pump with a tag number, to drawings and other documents. We would also like to know what class of pump it is and the details of its technical and physical features, and to be able to link all of that to a maintenance plan and spare parts inventory. This supports safety cases and, ultimately, the license to operate. Poor asset lifecycle management leads to issues such as poor handover from projects to operations, where gaps in information cause delays, extra costs and inefficiency. We must remove the problems that lead to duplicated effort and cost.
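The linkage described above can be sketched as a simple data structure. The field names here are illustrative assumptions, not drawn from any particular standard, but they show the shape of the relationship: one tagged object, many linked records:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaggedAsset:
    """Illustrative model of a tagged engineering object and the
    records linked to it. Field names are assumptions for the sketch."""
    tag: str                                          # e.g. "P-101A"
    asset_class: str                                  # e.g. "centrifugal pump"
    attributes: dict = field(default_factory=dict)    # technical/physical data
    documents: list = field(default_factory=list)     # P&IDs, datasheets, isometrics
    maintenance_plan: Optional[str] = None            # reference to the plan
    spare_parts: list = field(default_factory=list)   # inventory references

def link_document(asset, doc_id):
    """Relate a controlled document to the asset, avoiding duplicates."""
    if doc_id not in asset.documents:
        asset.documents.append(doc_id)
```

The point of the structure is the joins: once the pump's tag is the key, the same object can be reached from a P&ID, a maintenance plan or a spares list, instead of each system holding its own disconnected copy.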
3. Using data analytically
This is where the action is, where we get the real returns for all our efforts. Much of the labor of information management is about governance, or in other words building a stable foundation for our information. That is a tough and thankless task, and many fall short of even this level of maturity. It is not a popular line for a CIO to pursue; it won’t win many friends. So we answer it with analytics: show the business the money to be saved, the risks avoided and the decisions improved. From the previous example of our pump, analytics can ingest all forms of information concerning this single item: inspection reports (free text), maintenance systems (database records), sensor readings (real-time data), images and more. What do we get? Historical analysis and future prediction. We could simply replace that pump after a set number of operating hours, as per manufacturer guidelines, but what we really want is to know exactly how it is performing, how best to manage it, and the most cost-effective yet safe way of proceeding. Across an oil business the benefit may amount to a few percent saved in operations and maintenance. Sounds small? Given that this is often the largest area of expenditure for the business, the savings can be very large indeed: one day of lost production can cost millions of dollars. Skillful information management can play a leading role in improving production efficiency and delivering competitive advantage. It all hinges on digging into the data and being smart.
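The condition-based idea behind that pump example can be sketched with a trivial baseline-versus-recent comparison on a stream of sensor readings. The window size and threshold below are illustrative assumptions; a real system would use far richer models across many data sources:

```python
from statistics import mean, stdev

def needs_inspection(readings, window=24, z_threshold=3.0):
    """Condition-based check on a stream of sensor readings (e.g. pump
    vibration): flag when the latest window drifts well above the
    historical baseline. Window and threshold are illustrative, not tuned."""
    if len(readings) <= window:
        return False  # not enough history to judge against
    baseline, recent = readings[:-window], readings[-window:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # A perfectly flat baseline: flag any rise at all.
        return mean(recent) > mu
    # Flag when the recent average sits well outside the baseline spread.
    return (mean(recent) - mu) / sigma > z_threshold
```

This is the shift the section describes: instead of replacing the pump on a fixed schedule, the decision is driven by what the data says the pump is actually doing.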
My AIIM 2015 presentation will cover the range of points made in this blog and provide some practical suggestions on solving them.
[ Thanks for reading! See Neale and many other wonderful speakers March 18-20 at The AIIM Conference 2015 in San Diego. ]
About John Mancini
John Mancini, president and CEO of AIIM, is an author, speaker, and respected leader of the AIIM global community of information professionals. He is a catalyst in social, mobile, cloud, and big data technology adoption and an advocate for the new generation of experts who are driving the future of information management. John predicts that the next three years will generate more change in the way we deploy enterprise technologies and whom we trust with this task than in the previous two decades. His passion about the evolution of information workers into information analysts spurred John to establish the Certified Information Professional (CIP) program to enable anyone, anywhere to benchmark and develop new and strategic skills. His commitment to education includes the continual development of leading-edge training and publishing of ongoing industry research to help guide new thinking. As a frequent keynote speaker, John offers his expertise on the transformational challenges and opportunities facing information professionals and attracts over 100,000 visitors annually to his blog Digital Landfill. He has published six e-book titles including “#OccupyIT — A Technology Manifesto for Cloud, Mobile and Social Era” and the popular “8 Things You Need to Know About” e-book series. He has a Klout score in the high 60s, is ranked #5 in online SharePoint influence by harmon.ie and #42 in the KnowledgeLake SharePoint Influencer50. John can be found on Twitter, LinkedIn and Facebook as jmancini77.