Nadine Cranenburgh

  1. Description: The Internet of Things is creating a whole new digital agenda for oil and gas. This case study details how DiUS helped Environmental Monitoring Solutions use the cloud and IoT to tackle the global petroleum industry problem of petrol station inefficiencies and make a positive environmental impact.

Source: Based on a webinar delivered on 1 August 2017 to the Applied IoT Engineering Community of Engineers Australia by Zoran Angelovski, DiUS Principal Consultant, and Russell Dupuy, Managing Director of Environmental Monitoring Solutions.

Biographies: Zoran Angelovski has ridden the wild technology wave for over 20 years. He has a background in hardware development and broadband telecommunications, and more recently electric vehicle chargers, smart energy devices and IoT products. Russell Dupuy has over 25 years' experience in fuel system automation. He is an industry leader, disruptor and innovator. Forged from a formal engineering background, he has developed leak detection systems and wetstock management solutions for major oil companies in Australia, Europe, Japan and the USA. With a passion for the environment, Russell leads a number of environmental and industry workgroups to drive innovation and sustainability in what is often referred to as a mature and dirty industry.

Title: Disrupting Retail Petroleum

Introduction

This case study describes the development of the Fuelsuite remote monitoring and 24/7 support service for retail petroleum outlets. The system connects onsite data from service stations to the Cloud in a scalable and cost-effective manner, providing insights that allow clients to anticipate issues before they occur. A conceptual diagram of the Fuelsuite solution is shown below.

Diagram courtesy of Zoran Angelovski, DiUS and Russell Dupuy, EMS

The petroleum industry can be divided into two segments:

- upstream petroleum: exploration for crude oil; shipping of crude and refined oil; and cracking into finished consumer products
- downstream petroleum: bulk storage of oil at major storage facilities around the world; distribution and transportation of the oil; and retail marketing for consumer consumption.

There are several examples of the uptake of IoT systems in upstream petroleum: in seaboard terminals, refining or cracking plants, and ships. However, most are embedded in some form of mesh or SCADA system. Fuelsuite is an IoT solution for downstream petroleum, a segment that has so far shown little uptake of IoT technology.

The retail petroleum market incorporates approximately 540,000 developed retail service stations around the globe, shown in green on the diagram below. For the markets coloured dark grey in the diagram, verifiable information on the number of service stations is not available; however, research conducted by DiUS indicates that in excess of one million retail service stations exist.

Diagram courtesy of Russell Dupuy, EMS and Zoran Angelovski, DiUS

Developed retail petroleum markets typically have a point of sale and self-serve multi-hose fuelling for consumers. They also feature cash as well as other payment systems, and a high level of equipment automation. In contrast, attended sites, for example in Africa, generally operate on a docket or cash system. Over the last 15 years, the number of manufacturers supplying petrol stations with equipment globally has decreased from 250 to 50. Five dominant manufacturers remain, predominantly US- or European-based.
These five companies are heavily invested in consolidation through acquisition, and in continuing proprietary systems for commercial reasons.

Client profile and IoT solution goals

Target clients are retail petroleum marketers with the following profile:

- own and operate up to 800 service stations
- sell up to 3 billion litres of fuel per year
- spend up to $15 million per year cleaning up spills and leaks
- spend up to $45 million per year on maintenance.

Typical client technologies include the following:

- POS-BOS (point of sale and back office) systems
- automatic tank gauges for measuring fuel
- automated dispensers to deliver fuel to the hose
- intelligent pumps to push the fuel from the tank
- leak detection systems
- water management and monitoring systems
- fridges and air compressors
- sensors for a range of other equipment, such as pie warmers, coffee machines, slurpee machines and bains-marie.

The goals of the IoT solution are to reduce:

- environmental spend by greater than 50%
- maintenance spend by more than 15% per year
- fuel variance by more than 0.2%.

Challenges

In retail petroleum, the various systems at a service station are not integrated, and clients are resistant to open architecture solutions because the proprietary enterprise systems available from the small pool of global equipment manufacturers offer commercial benefits. Service stations often run on old hardware and protocols, such as the current loops connecting petrol pumps to point of sale terminals; these are mostly standards compliant, but may produce signals that are out of specification.

Many retail service stations have automated technologies but revert to manual processes, such as manually metering the fuel being delivered into tanks. This leads to safety and environmental issues, including:

- employees being struck by moving vehicles or assaulted by customers
- above-ground spills leading to serious fires.

Another challenge is maintaining underground tanks to meet environmental compliance standards for preventing fuel leaks. In the US, over the ten-year period to 1998, 1.5 million underground tanks were closed due to non-compliance. This left 380,000 sites to be cleaned up, at a run rate of 19,000 per year. Inventory management is often quite basic, resulting in high fuel variances, which means that clients are unable to accurately account for the fuel stored underground.

Solution

The solution developed by Environmental Monitoring Solutions (EMS) incorporated the following steps:

- develop hardware to connect devices on site
- connect it to the cloud
- build a leading cloud platform
- migrate EMS's existing intelligence into the cloud
- choose a build partner
- choose the right platform.

These steps are described further in the subsections below.

Develop hardware

One fundamental challenge was to develop hardware to connect all the devices on site. As there were commercial benefits in clients retaining their existing enterprise equipment, it was decided to create a custom device for use with the existing equipment at retail service stations, rather than swapping out that equipment for a range of third-party devices. Firstly, the solution needed to connect to the gauge on the fuel tank, in order to collect fuel levels, water detection, fuel leakage, temperature and other readings. Secondly, a custom piece of hardware (a pump communications module) needed to be designed to connect to the pumps and detect how much fuel was being dispensed. This module had to be non-intrusive to the rest of the current loop that physically connected equipment on site.
This allowed the data loop to be closed in terms of how much petrol was underground in the tanks and how much was being dispensed through the pumps. The third task of the initial phase of the project was connecting to the price board. This is especially critical for remote sites with no attendant to change the price on site, so it is a task that ideally needs to be done remotely.

The aim of this connectivity was to build the capability to collect data into the Cloud, where it can be analysed in real time using advanced data analytics techniques. The availability of this data helps shift clients from reacting to problems that have already occurred to anticipating problems before they occur, as shown in the diagram below.

Diagram courtesy of Russell Dupuy, EMS and Zoran Angelovski, DiUS

Connect to cloud and migrate system

It was then necessary to build a leading cloud platform, as the existing legacy system was outdated: it was an enterprise website, not a true cloud application. All algorithms and intelligence needed to be migrated into that cloud. A physical diagram of the Fuelsuite solution is shown below.

Diagram courtesy of Russell Dupuy, EMS and Zoran Angelovski, DiUS

As shown in the diagram above, the Things (tank gauge, pump communications module and price board) at the petrol station were connected via a single board computer module and LTE modem to the cellular data network. Data was transmitted via the data network to the IoT Cloud infrastructure and on to consumers: the Fuelsuite management tools and the users who act on the analysed data to prevent problems before they occur, for example by turning off a pump when water contamination is detected.

Build partner and IoT platform

EMS chose DiUS as their build partner. The platform chosen was Amazon Web Services (AWS) for both Cloud and IoT. An architectural diagram of Fuelsuite is shown below.

Diagram courtesy of Russell Dupuy, EMS and Zoran Angelovski, DiUS

At one end are the Things (devices and hardware). As the intention is to deploy thousands of Things over a multitude of petrol stations, the IoT infrastructure provides an effective way to communicate over a network to the Cloud, so that the Fuelsuite management tools can process the data and deliver information to users. The solution leverages the IoT connection from AWS, which provides a software development kit (SDK); essentially it operates on a simple single board computer module that gives connectivity from the remote end into the hub, or IoT gateway, in the Cloud. This provides a secure end-to-end connection across the mobile data network, along with mechanisms to authenticate the devices using AWS-generated certificates.

Additionally, the solution needed a protocol to run across that mobile connection. MQTT was chosen because it operates well in bandwidth-restricted environments. This was important because the developers anticipated the future deployment of narrowband IoT technologies and wanted to be able to leverage them. MQTT is also a lightweight protocol, which reduces the cost of the data plans needed to communicate with thousands of deployed devices across many service stations, compared with HTTP and other internet-based protocols.
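As an illustration of this device-side publishing pattern, here is a minimal sketch in Python using the paho-mqtt client with mutual TLS. The endpoint, certificate file names, topic and payload fields are hypothetical placeholders, not Fuelsuite's actual values.

```python
import json
import ssl

import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x; 2.x also needs a CallbackAPIVersion argument

# Hypothetical AWS IoT endpoint -- a real deployment uses its account-specific endpoint.
ENDPOINT = "example-ats.iot.ap-southeast-2.amazonaws.com"

client = mqtt.Client(client_id="site-0042-gateway")

# Mutual TLS: the AWS-generated device certificate authenticates the device,
# and the Amazon root CA authenticates the broker.
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device-certificate.pem.crt",
               keyfile="device-private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)

client.connect(ENDPOINT, port=8883)
client.loop_start()

# A compact JSON reading keeps the payload light on a constrained data plan.
reading = {"tank": 3, "fuel_mm": 1642, "water_mm": 2, "temp_c": 18.4}
client.publish("fuelsite/0042/tank/3/reading", json.dumps(reading), qos=1)
```

QoS 1 (at-least-once delivery) suits intermittent cellular links without the extra handshake overhead of QoS 2.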
Once the data is in the Cloud, it is routed via the AWS Rules Engine. This is a very simple way to route data to other services, so that it can be manipulated, managed and delivered to the consumer. It also provides a very clear demarcation between the Cloud and the remote devices. If the platform provider brings out a new feature, or if new capabilities are required for the Fuelsuite solution, the routing can easily be adjusted on the Cloud side alone, without relying on a firmware upgrade. This is very convenient because, with many devices out in the field, it is desirable to avoid upgrading firmware remotely.

The equipment supporting the service is managed through the Device Shadow, which provides a simple way of determining whether a device is online or offline, and a clear view of the requested versus reported configurations. If there are any differences, the Cloud configuration can be reconciled with the actual hardware configuration, and the equipment can get on with its job. The Device Shadow also works with intermittent connectivity, which is critical when devices communicate over wireless networks such as 3G or 4G. Lastly, the Device Shadow enables pre-configuration of devices before they are physically available. As devices are installed, their configuration is reconciled against the pre-configuration settings if there are any differences. This allows field staff to operate without needing to make changes in the Cloud. (A cloud-side shadow sketch appears at the end of this case study.)

Other services used in the Cloud are:

- Kinesis Streams: provides a scalable way to capture and manage the large volumes of data funnelled into this concentrated point
- Kinesis Firehose: provides the ability to stream data on to other services, such as Elasticsearch and the notification queues
- Elasticsearch: enables the use of indexed searching.

Next steps

The custom hardware produced for this solution will be rolled out to about 1,000 sites in the second half of 2017. This will make tangible gains in the data collected by the industry: currently only around 20% of tanks are connected to the internet, they provide data only about once a day, and no pump data is being collected at present. Further investigations will also be conducted into how to use the data to better target environmental monitoring, making the most of limited resources to achieve better outcomes.
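To make the Device Shadow mechanism concrete, here is a minimal cloud-side sketch using boto3, the AWS SDK for Python. The thing name and configuration key are hypothetical; the desired/reported/delta structure is the shadow document's standard format.

```python
import json

import boto3

# The 'iot-data' client talks to the AWS IoT data plane (shadows, publishing).
iot = boto3.client("iot-data", region_name="ap-southeast-2")

THING = "fuelsite-0042-pump-module"  # hypothetical thing name

# Request a configuration change by writing the 'desired' state; the device
# reports back via 'reported' when it next connects.
iot.update_thing_shadow(
    thingName=THING,
    payload=json.dumps({"state": {"desired": {"poll_interval_s": 60}}}),
)

# Read the shadow back; a non-empty 'delta' means the device has not yet
# reconciled its reported configuration with the requested one.
doc = json.loads(iot.get_thing_shadow(thingName=THING)["payload"].read())
print(doc["state"].get("delta", "device configuration is up to date"))
```

Because the shadow persists in the Cloud, the desired state can be written before a device is ever installed, which matches the pre-configuration workflow described above.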
  2. Interactive Analytics

    Introduction

IoT applications discover and store huge volumes of data from multiple sources, and process it using various forms of data analytics. The results of this analysis need to be presented in a way that is useful to end users and aids their decision making. Presenting data in visual forms, such as charts and graphs, enables users to understand what is happening at a glance and to conceptualise what further investigations are required to understand a complex phenomenon (such as building vibrations). Interactive analytics are a set of tools that allow data analysts to investigate the data and create visualisations that present easy-to-read results and figures relating directly to users' requirements. They also provide the ability to share these results with others (a minimal charting sketch is included below). The main vendors of interactive analytics tools include Microsoft (Power BI), Tableau, IBM, SAP, Oracle, SAS, MicroStrategy and QlikTech.

Sources: The information on this page has been sourced primarily from the following: Case study titled Studying movement behaviour in a building: A case study of obtaining analytics from IoT Data
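As a minimal illustration of turning raw readings into an at-a-glance visual, here is a sketch using pandas and matplotlib with made-up vibration data; commercial tools such as Power BI or Tableau wrap the same idea in shareable, interactive dashboards.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Made-up hourly vibration readings from sensors on two floors of a building.
rng = np.random.default_rng(0)
idx = pd.date_range("2017-08-01", periods=24, freq="h")
df = pd.DataFrame({"level_3": rng.normal(0.8, 0.1, 24),
                   "level_7": rng.normal(1.1, 0.2, 24)}, index=idx)

# One line turns the table into a chart a user can read at a glance.
df.plot(title="Building vibration by floor (illustrative data)")
plt.xlabel("Time")
plt.ylabel("Vibration")
plt.show()
```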
  3. Data Integration

    Introduction

Data for IoT applications often comes from many heterogeneous sources, which may not be easily brought together for analysis. Data integration is one approach used in IoT solutions to allow disparate data to provide useful decision-making support. It is a technique often used in data warehousing. Data integration makes use of Extract, Transform, Load (ETL) tools (a minimal ETL sketch is included below). There are also newer tools, including Enterprise Feedback Management (EFM). These are the tools utilised to move the data from one point to another. ETL vendors are listed in the diagram below.

Diagram courtesy of Jorge Lizama, GHD

Sources: The content on this page was primarily sourced from the case study titled Studying movement behaviour in a building: A case study of obtaining analytics from IoT Data

Further reading: Data integration info website
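Here is a minimal ETL sketch in Python: extract records from a hypothetical CSV export, transform heterogeneous fields into a common schema, and load the result into a SQLite table for analysis. The file and field names are invented for illustration.

```python
import csv
import sqlite3

# Extract: read raw rows from a hypothetical sensor export.
with open("sensor_export.csv", newline="") as f:
    raw_rows = list(csv.DictReader(f))

# Transform: normalise fields into one schema, e.g. temperatures
# arriving in Fahrenheit are converted to Celsius.
def transform(row):
    temp_f = float(row["temp_F"])
    return (row["sensor_id"], row["timestamp"], (temp_f - 32) * 5 / 9)

clean = [transform(r) for r in raw_rows]

# Load: write the unified records into a warehouse-style table.
con = sqlite3.connect("warehouse.db")
con.execute("CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, ts TEXT, temp_c REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)", clean)
con.commit()
```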
  4. In-Memory Computing

    Introduction

A major challenge in processing the volume of data required for IoT solutions is sourcing sufficient computing power to perform computations that must happen before aggregation. When this needs to be carried out record by record, many traditional data environments, which store data on disk, are not sufficiently powerful to complete the task, or may take days to deliver results. An in-memory database (IMDB) is a database management system that primarily relies on main memory (RAM) for data storage. It can be thousands of times faster than a disk-based database and is useful for real-time analytics that need to happen very quickly (a small illustrative comparison is sketched below). In-memory computing also has very good compression algorithms, which make it possible to better utilise the available storage space.

Diagram courtesy of Jorge Lizama, GHD

Some key vendors of in-memory computing systems are shown in the diagram below.

Diagram courtesy of Jorge Lizama, GHD

Sources: The information on this page has been sourced primarily from the following: Case study titled Studying movement behaviour in a building: A case study of obtaining analytics from IoT data
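Much of the speed difference comes from avoiding disk I/O on every operation. SQLite's ':memory:' mode gives a one-line taste of the idea in Python's standard library; commercial IMDBs add compression, durability strategies and parallelism on top. The table and workload here are purely illustrative.

```python
import contextlib
import os
import sqlite3
import time

def bulk_insert(conn, n=200_000):
    # A record-by-record workload of the kind described above.
    conn.execute("CREATE TABLE readings (id INTEGER, value REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)",
                     ((i, i * 0.5) for i in range(n)))
    conn.commit()

with contextlib.suppress(FileNotFoundError):
    os.remove("on_disk.db")  # start the disk comparison from a clean file

for target in (":memory:", "on_disk.db"):
    conn = sqlite3.connect(target)
    start = time.perf_counter()
    bulk_insert(conn)
    print(f"{target}: {time.perf_counter() - start:.3f}s")
    conn.close()
```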
  5. Semantic Sensor Networks

    Introduction

A very important technology in the sensor discovery area, introduced in 2013, is the semantic sensor network (SSN). Describing sensors and their data in a consistent and common framework makes it easier to discover them. This semantic sensor network description was developed by the World Wide Web Consortium (W3C), a consortium of organisations around the world. The W3C is also working with the Open Geospatial Consortium (OGC) to clarify and formalise the standards landscape for spatial information on the web.

SSN is an ontology that describes aspects of sensors and the systems using them. It describes the deployment, the data, the system, the operating restrictions, the devices, the measuring capability, and the constraints of the sensors. The SSN can be focussed on:

- a sensor perspective: what is sensed, how it is sensed
- a data or observation perspective: observations and related metadata
- a system perspective: systems of sensors
- a feature and property perspective: features, properties of features, and what can sense them.

The SSN ontology can be downloaded from the W3C website. It has been used to annotate semantic web linked open data, and can be queried using tools such as SPARQL (a small query sketch is included below). SSN is used extensively around the world, especially in Europe, and is the de facto standard in this area today.

Paradigm shift

SSN represents a paradigm shift from the hard-coded vertical approach of referencing sensors by name or number to discovering sensors based on a description of the sensor, the sensor platform or the information it can provide, as shown in the diagram below.

Diagram courtesy of Prem Prakash Jayaraman, Swinburne University of Technology

Ontology modules

The SSN consists of several ontology modules, as shown in the diagram below.

Diagram courtesy of Prem Prakash Jayaraman, Swinburne University of Technology

These modules provide the ability to describe sensing platforms, sensors and capabilities at a minute level. The sensor is described using an HTTP URI. For example, a sensor could be an air temperature sensor made by a particular manufacturer. It could observe air temperature and humidity, with units of measurement in Celsius or Fahrenheit. Any machine can look up this URI, get a description of the sensor, and understand exactly what the sensor produces, how it produces this information, and where the data comes from. Other entries in the sensor description could include accuracy, location, owner and frequency of measurement. An example is shown in the diagram below. System developers can develop queries using the properties and features that are relevant to their solution.

Diagram courtesy of Prem Prakash Jayaraman, Swinburne University of Technology

Sources: The information on this page has been sourced primarily from the following: A webinar titled IoT application development with open data-driven computing platforms by Prof Dimitrios Georgakopoulos, Swinburne University of Technology; and a webinar titled An Open Source approach to the Internet of Things by Prem Prakash Jayaraman, Research Fellow, Key Lab for IoT, Swinburne University of Technology
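To show how such a description can be queried, here is a small sketch using the rdflib Python library: a toy sensor description loosely modelled on the pre-2017 SSN namespace, queried with SPARQL for sensors that observe air temperature. The instance names are invented for illustration.

```python
from rdflib import Graph

# A toy sensor description, loosely following the (pre-2017) SSN vocabulary.
TTL = """
@prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#> .
@prefix ex:  <http://example.org/sensors#> .

ex:tempSensor1 a ssn:Sensor ;
    ssn:observes ex:AirTemperature .
ex:humSensor1 a ssn:Sensor ;
    ssn:observes ex:Humidity .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Discover sensors by capability rather than by hard-coded name or number.
QUERY = """
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
PREFIX ex:  <http://example.org/sensors#>
SELECT ?sensor WHERE { ?sensor a ssn:Sensor ; ssn:observes ex:AirTemperature . }
"""
for row in g.query(QUERY):
    print(row.sensor)  # -> http://example.org/sensors#tempSensor1
```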
  6. Machine Vision

    Introduction

Machine vision provides an alternative form of data input to IoT systems. Typically it is used wherever there is a requirement for visual inspection and imagery is available. Images often come from surveillance or security systems, but drones and smartphones have enabled many more ways to collect imagery. Common applications include asset inspection, product inspection and traffic counting, and machine vision can be used to facilitate decision making in the supervisory systems of many smart city applications. It may even be used as an input to the project management of construction projects, or for monitoring vegetation and species growth in mine site rehabilitation.

Machine vision usually works by combining imagery with other techniques, such as optical character recognition (OCR) to extract text, or facial recognition and face detection. Typical analytical techniques include blob analysis and edge detection to find and classify the parts of the image that are of interest. Machine vision is well suited to defined, repetitive tasks such as completing inspections and detecting defects. It delivers consistent results, but is not suitable for tasks with a high level of ambiguity and uncertainty. Machine learning allows the collection and categorisation of data so that systems can adaptively learn and improve over time by highlighting changes and anomalies in the data.

Machine vision differs from human vision. Human vision is adept at managing decision making, change and variation, and at filling in missing information when images are viewed in context. However, it is not always precise. The key concepts of human vision, machine vision and machine learning are summarised in the following diagram.

Diagram courtesy of Ryan Messina, Messina Vision Systems

Design considerations

The design process for automating systems using machine vision can be iterative. The repetitive tasks are replaced with machine vision first, while human vision is still used to address variable conditions and decision making. The system can then be continuously improved and further automated using machine learning, and by addressing environmental factors, as discussed further in the case studies below and on the designing for IoT page.

A key design question is whether to employ edge computing principles or to transmit imagery via a network to another location for analysis. For example, in the traffic counting case study below, it would be better to send a periodic total count of vehicles rather than transmitting the image files. This requires greater processing capability and power in the device capturing the images, so a trade-off may be required. One advantage of transmitting data rather than images is that the device can be connected using an LPWAN network. However, if the imagery is transmitted to the cloud, then more advanced data analytical techniques such as cognitive computing or machine learning may be employed.

A related design question is whether data needs to be transmitted in real time. Some applications, such as product inspection, need to take immediate action if faults are detected. However, for battery-operated IoT devices, power is always a primary consideration. Techniques such as only taking a photo when a person, object or animal moves past the camera may be employed to minimise power usage, as sketched below.
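Here is a minimal sketch of that motion-triggered technique using OpenCV background subtraction: frames are only written out (or transmitted) when enough of the scene changes, keeping the radio and storage idle the rest of the time. The capture source and threshold are illustrative.

```python
import cv2

cap = cv2.VideoCapture(0)  # camera index 0; a video file path also works
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

frame_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # 'Wake up' only when more than ~1% of pixels are foreground (motion).
    if cv2.countNonZero(mask) > 0.01 * mask.size:
        cv2.imwrite(f"snapshot_{frame_id:06d}.jpg", frame)
    frame_id += 1
cap.release()
```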
Another concept that is becoming more important for automated machine vision systems is 'transfer learning': the ability to transfer some of the learning of established automated systems to different but related applications. Currently, one of the major cost constraints in machine vision systems is the labour required to label the data, and to check and validate the system, so this aspect needs to be considered in the business case for machine vision systems. Finally, machine vision systems can be vulnerable to changes in their environment. For example, an automated car could be designed to stop at a traffic light that looks a certain way; if the traffic light design is changed, the car may no longer stop for a red light. A full risk analysis should be conducted in the context of each application.

Case Studies

Machine Vision: Traffic Counting

One example of the iterative design of an automated system using machine vision is a traffic counting system designed by Messina Vision Systems. Before automation, traffic engineers used human vision to count vehicles, and to make judgements about vehicle types, incidents and speed, in video files that were between 24 hours and one month long. They engaged Messina Vision Systems to automate the process with a required 99% accuracy.

The first iteration of the solution involved designing a motion sensor to separate the repetitive task of watching the video of the road or intersection from the more complicated task of classifying vehicle type, speed and actions. The motion detector took snapshots, or images of sections of the video. It achieved only 90% accuracy, but engineers were able to put the snapshots from the motion sensor in a folder and spend an hour, rather than 24 hours, counting the vehicles to ensure an accurate count. The next iteration of the design addressed the two main flaws in the original system: double counting cars, and counting blank frames where there were no cars. A script was written to analyse the machine-processed video and remove duplicates and empty frames, using the machine learning concept of clustering. This raised accuracy to 95% (a simplified blob-analysis sketch appears after these case studies). To increase the accuracy of the system further, the client addressed environmental issues such as birds perching on cameras, large trucks blocking the camera image, and sun reflections on the lens. This was done by using quality control measures to position the cameras so that these environmental factors were minimised or eliminated.

Machine Vision: Asset Inspection

A second example is a system designed by Messina Vision Systems to automate pit inspections using machine vision. This was originally a four-week manual process: first collecting the data, then converting it to usable information, then reporting and reviewing. The first iteration was a simple system in which photographs were taken with a smartphone to track changes, and a form was filled out during the manual inspection. This reduced the process to three weeks, as usable information was collected on site. The next iteration was to pre-fill the reports with information collected on site using tablets, which reduced the task to two weeks. In the future, this system could be further automated by using robotic inspectors to perform the inspections, and by predicting when inspections need to be performed.
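Returning to the traffic counting case study, here is a simplified sketch of the blob-analysis step: background subtraction isolates moving regions, and contours above a size threshold are treated as candidate vehicles. As in the case study's first iteration, counting blobs frame by frame double-counts vehicles, so a deduplication step (such as the clustering described above) would still be needed. The video file name and area threshold are illustrative.

```python
import cv2

cap = cv2.VideoCapture("intersection.mp4")  # hypothetical traffic video
subtractor = cv2.createBackgroundSubtractorMOG2()

candidates_per_frame = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Clean up noise so small speckles are not mistaken for vehicles.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Blob analysis: keep contours large enough to plausibly be vehicles.
    vehicles = [c for c in contours if cv2.contourArea(c) > 800]
    candidates_per_frame.append(len(vehicles))
cap.release()
```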
Sources: The information on this page has been sourced primarily from the following: A webinar titled 'How Machine Vision Helps Realise the Smart City Concept' by Ryan Messina, Director and System Engineer, Messina Vision Systems.