
Tim Kannegieter

Everything posted by Tim Kannegieter

  1.
    View the recording: This webinar has passed. The recording is available free for EA members on MyPortal. Navigate to Industry Applications > Smart Cities ____________________________________________________________________________________ Presenter: Ryan Messina, Director and Systems Engineer, Messina Vision Systems What you will learn: How to design for humans and why machines think differently. What role machine vision plays within larger networks. How uncertainty affects information systems. Developing information systems that grow and cope with tacit knowledge. Description: This presentation will discuss machine vision and unstructured data types in IoT systems. These technologies unlock many new opportunities; examples include traffic flow control and monitoring, feeding into an IoT system in ways that can benefit a Smart City. We will discuss methods to measure environmental conditions that can impact how systems operate in the tested conditions. Further, we will suggest methods of contingency planning that use information to turn a human-operated task into a semi-autonomous one, and eventually a completely autonomous system. About the presenter: For the past three years Ryan Messina has been the Director and Systems Engineer at Messina Vision Systems, focused on delivering machine vision solutions for rugged environments. His experience covers many industries including surveillance, manufacturing, defence, agriculture and infrastructure. Ryan has a Bachelor of Engineering (Robotics and Mechatronics) from Swinburne University and uses machine vision to assist in gathering additional information.
  2. Tim Kannegieter


    IoT Core is their main offering. See
  3. Interesting article on the US Federal Communications Commission (FCC) issuing a thinly veiled warning targeting manufacturers and marketers of IoT devices. The bottom line is that any RF-emitting device has to go through the usual compliance checks, which is nothing new. But the FCC is essentially giving notice that it is aware of the potential for IoT devices to slip through the cracks and will not let this happen. See the FCC's Order and Consent Decree and an article explaining it.
  4. Tim Kannegieter


    Information that is stored in a cloud server is usually "guaranteed" to be backed up and redundant. When one stores information on a cloud server one cannot control exactly where it is stored. Obviously it sits on a hard drive somewhere in a server farm, but the user has no knowledge of that. For mission-critical data, some designers choose not to entrust that to the cloud, or they synchronise it and back it up via their own servers. Any such data can be periodically downloaded to your own local server, which is accessible to the internet via the appropriate firewalls and security measures. As the cloud industry grows and systems are increasingly designed to work in the cloud, reputable providers will be trusted to provide a highly secure and highly redundant back-up service.
  5. Tim Kannegieter


    There are a number of key challenges remaining before the opportunity presented by the Internet of Things can be fully realised. Batteries There is a perception that the cost of a "Thing" can be in the order of a dollar or so, due to the tremendous reduction in the cost of embedded electronics. While it is true that you can buy a system-on-chip communications device for approximately a dollar, the battery might cost $15 or so for a 10-year life. While the cost of batteries remains an order of magnitude above the electronics, mass deployment of IoT devices may be cost prohibitive. IPv6 Addressing One aspiration in the Internet of Things is that everything has an IPv6 address. However, at this stage it's not possible for an IoT system developer to simply order thousands of IPv6 addresses in a batch. This is mainly because IPv6 addresses are managed by the Internet Assigned Numbers Authority and are granted to Internet Service Providers, who are generally not yet organised to support IoT system developers. At present developers are often creating devices that are capable of being addressed with an IPv6-formatted address, in anticipation that internet service providers will offer such services in the future. At this point in time (early 2017) IoT architectures must use proxies which give virtual IPv6 addresses to Things and connect to a gateway that actually communicates with the network. One way of identifying devices is to embed an EUI-64 chip as per RFC 2373. This effectively gives a unique MAC address to every device, and the EUI-64 becomes part of the overall IPv6 address. Industry collaboration There is a need for a coherent national strategy to develop our IoT industry, to foster innovation through the uptake of IoT as well as startups based on IoT technologies. There could be greater adoption of crowd-based innovation. There needs to be a sectorial approach and liaison with existing industry growth centres.
For example, in food and agribusiness, CASA regulations restricting the use of drones to "line of sight" are inhibiting the uptake of many potential IoT solutions. In utilities, smart metering will enable many applications but could be encouraged. For smart cities, local governments need to be encouraged to experiment and liaise with other city-wide authorities. There needs to be a better approach to open data, interoperability, and encouraging IoT-led growth based on shared data across supply chains and industries. This requires development of national or sector data-sharing principles, and guidelines for contracting and allocation of liability around shared data. The current frameworks for spectrum and licensing need to be reviewed to take into account the special needs of IoT. This includes the need for mass sensor connectivity, better ways of sharing the licensed and unlicensed spectrum, and real-time monitoring to enable spectrum farming. There also need to be better guidelines around security, including data protection. There is a need for consumer-trusted models and updates to the Telecommunications Act to cover IoT security. These issues are being addressed by the IoT Alliance Australia.
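The EUI-64 addressing mentioned above can be sketched briefly. Under RFC 2373, a modified EUI-64 interface identifier is derived from a device's 48-bit MAC address by inserting FF:FE in the middle and flipping the universal/local bit of the first octet; the identifier then forms the lower 64 bits of the IPv6 address (the example MAC below is hypothetical):

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier (RFC 2373)
    from a 48-bit MAC address: insert FF:FE between the OUI and the
    device bits, then flip the universal/local (U/L) bit."""
    octets = [int(b, 16) for b in mac.split(":")]
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02  # flip the U/L bit of the first octet
    # Format as four 16-bit IPv6 groups
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:04x}" for i in range(0, 8, 2))

print(mac_to_modified_eui64("00:25:96:12:34:56"))  # 0225:96ff:fe12:3456
```

Prefixing the result with a routing prefix (e.g. a /64 allocated by a provider) yields the device's full IPv6 address.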
  6. Recording: This event has now passed. If you are an EA member the recording can be accessed for free via MyPortal. Navigate to Industry Applications > Other ________________________________________________________________________________________ Presenters: Zoran Angelovski, DiUS Principal Consultant, and Russell Dupuy, Managing Director of Environmental Monitoring Solutions What you will learn: How this solution delivers real-time visibility on fuel sites - managing how they proactively maintain sites, optimise delivery and build data models for trends. The efficiencies being driven through real-time operational visibility. An overview of the tech underpinning the custom-built sensors. How they designed a flexible and remotely upgradable monitoring solution using passive snooping techniques for use with potentially fully loaded station circuits. Description: The Internet of Things is creating a whole new digital agenda for oil and gas. This presentation is a case study on how DiUS helped Environmental Monitoring Solutions use the cloud and IoT to tackle the global petroleum industry problem of petrol station inefficiencies and make a positive environmental impact. About the presenters: Zoran Angelovski: Riding the wild technology wave for over 20 years, Zoran has seen it all. He has a background in hardware development, broadband telecommunications and, more recently, electric vehicle chargers, smart energy devices and IoT products. If he’s not tinkering in the hardware space, you will find him exploring Victoria’s beaches and high country. Russell Dupuy: With over 25 years’ experience in fuel system automation, Russell is an industry leader, disruptor and innovator. Forged from a formal engineering background, he has developed leak detection systems and wetstock management solutions for major oil companies in Australia, Europe, Japan and the USA.
With a passion for the environment, Russell leads a number of environmental and industry workgroups to drive innovation and sustainability for what is often referred to as a mature and dirty industry. When not disrupting his industry, Russell keeps busy mountain biking, surfing, snowboarding and gardening.
  7. until
    Recording: This webinar has now passed. Members of Engineers Australia can view the recording free on MyPortal. Just log on and navigate to Practices > Legal ____________________________________________________________________________________ Presenter: Ashley Kelso, Senior Associate, AustraLaw What you will learn: Practical strategies for reducing and resolving disputes on IoT projects Key legal risks to address throughout the IoT project lifecycle Effective contract management using IoT technology Practical lessons from actual IoT litigation Legal risks and rights outside the contract Description: Internet of Things (IoT) projects are a complex multiparty undertaking, requiring the cooperation of asset owners, technology providers, consultants, communication service providers, and a range of other stakeholders. Adding to this, securing legal rights for the use and maintenance of the ICT systems is critical to the ongoing operation of these projects. Successful delivery and operation of these assets requires effective communication, a sound understanding of the legal landscape, and practical systems and procedures to secure the strength of your legal position if things escalate. This webinar will take a practical approach, discussing the various perspectives of the stakeholders and the strategies to promote successful project delivery. Lessons will be drawn from past cases to illustrate how to do it, and how not to do it. About the presenter: Ashley Kelso holds degrees in Mechatronics Engineering and Law. He is also a member of Engineers Australia and the Applied IoT Engineering Community.
Ashley has worked in project management and systems engineering roles at the Department of Defence and is skilled in navigating the political dimension of projects, technical writing, and communicating technical issues to non-technical decision-makers. His legal experience covers work in equity, administrative law, contracts, civil litigation and alternative dispute resolution. More recently he has turned his attention to assisting those in the engineering and technology industries, with a particular interest in the Internet of Things. He heads up AustraLaw, a commercial arm of Kelso Lawyers, focused on IP, contracts, and dispute work.
  8. Tim Kannegieter


    Gateways are a key device in IoT systems, typically connecting IoT devices in the field with the internet. The data rate required between the RF module on the IoT device and the gateway drives a number of design decisions. For a higher data rate, adding edge intelligence and some processing at the device level may be required. That could be a processor module or an RF module with an integrated processor. Rather than downloading raw data, adding edge intelligence can reduce wireless network traffic by only uploading data when a particular event occurs. Similarly, from the gateway to the cloud, when using a 3G, 4G or satellite link with high data cost, restricting the amount of data might be warranted. Again, a typical strategy is to program the gateway to filter data down to only what the system users actually need. That could be achieved with a Python script that can be downloaded to the gateway to tell it which sensors are connected. Clearly, the gateway requires an RF module that supports the same RF modules used to connect to the sensors. That could be a meshed radio topology such as Zigbee, or it might be a 920 MHz module integrated into the gateway. Data is sent from the gateway into the cloud via fixed Ethernet, LTE or satellite. Security may be implemented at that level depending on the sensitivity or privacy requirements. An intelligent gateway that implements some level of security would then be required. Communication considerations Deployments of large networks will require gateway management, which provides network management capability from the cloud. Today, management data is usually segregated from the application data. That is, a separate system just manages the gateways and the RF sensors connected to them.
Functions that the management interface would provide include indication of whether the gateways are up or down, remote reset capability, remote firmware update capability, and remote software or configuration change capability, which might be required due to new sensors being connected to the gateway. In addition to gateway management, management of the end devices is required. The status of all the IoT devices connected to each gateway, which sensors are configured, and the sensors’ status need to be monitored. Additionally, over-the-air updates of the RF modules themselves are required when new radio firmware is released, or for any application update in the case that the end device has such intelligence. Another consideration is how to automatically establish a secure connection when field devices wake up after being asleep. A gateway needs to be designed to auto-discover devices in its vicinity and auto-authenticate them as trusted devices. Security considerations The security of the sensor-to-gateway link has to be considered. Encryption of those links is an option, or application-layer security could be implemented in the gateway. Another part of security is clearly the gateway to the cloud. How can the data going up to the cloud, and the controls coming back, be secured? When controlling a farm application or some form of SCADA system, security is a significant concern. The gateway itself is typically Linux based, not dissimilar to consumer WiFi routers that can potentially be hacked. As engineers, we must consider the implications should a gateway be compromised: malicious applications could be added, sensitive data could be sent to another server, or invalid outputs could be sent back into the system. One solution to prevent a gateway being compromised in that way, though not the only one, is to implement a lockdown at the application layers.
Intel have produced an IoT gateway development kit, and a variety of other vendors make gateways which implement similar lockdown capability. Essentially the gateway is locked down such that it still has all the required functionality but runs Wind River Linux (Wind River is now part of Intel) and McAfee Embedded Control. That allows white-listing of applications, such that only a defined list of applications can be installed and run on the gateway. Since manageability, security, connectivity and a run-time environment are implemented on the gateway, it can be locked down using that software stack. Multiple vendors have developed their own gateways using that stack, which is available to gateway vendors. Other gateways have their own security systems, but this is something engineers and designers of IoT applications need to be familiar with. Sources: Material on this page has primarily been sourced from the following: Presentation by Phillip Lark, Engineering Manager, Braetec titled Front End Integration: Connecting sensors to the cloud
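The edge-filtering strategy described above can be sketched in a few lines. This is a minimal illustration, assuming a simple change-threshold rule; the threshold value and readings are hypothetical, and a real gateway script would also handle sensor discovery and transport:

```python
def should_upload(value, last_sent, threshold=0.5):
    """Forward a reading to the cloud only when it differs enough from
    the last uploaded value, cutting traffic on a costly 3G/4G/satellite
    backhaul. The first reading (last_sent is None) is always sent."""
    return last_sent is None or abs(value - last_sent) >= threshold

def filter_stream(readings, threshold=0.5):
    """Apply the edge filter to a sequence of raw sensor readings,
    returning only the values that would actually be uploaded."""
    uploaded, last = [], None
    for value in readings:
        if should_upload(value, last, threshold):
            uploaded.append(value)
            last = value
    return uploaded

# Six raw readings collapse to three uploads
print(filter_stream([20.0, 20.1, 20.2, 21.0, 21.1, 25.0]))  # [20.0, 21.0, 25.0]
```

The same pattern generalises to event-based rules (state transitions, alarm conditions) rather than a plain numeric threshold.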
  9. RF topologies are a key consideration in designing communication networks. Types of topologies include star networks, point-to-multipoint star networks and meshed networks. Meshed networks provide the advantage that they are self-healing: one device can communicate through other devices, using multiple paths where available, to get to the destination radio. An example of a meshed radio network is Zigbee. There are other proprietary meshed radio networks too, which might be considered if longer distances, intermediate radio hops and self-healing capability are required for a network. Meshed radio systems are a much more flexible technology. Sources: Material on this page has primarily been sourced from the following: Presentation by Phillip Lark, Engineering Manager, Braetec titled Front End Integration: Connecting sensors to the cloud
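The self-healing property of a meshed network can be illustrated with a breadth-first search over a small hypothetical topology: when one intermediate node fails, traffic re-routes through the remaining neighbours to reach the destination radio.

```python
from collections import deque

def find_route(links, src, dst, failed=frozenset()):
    """Breadth-first search over a mesh topology graph. Nodes in
    `failed` are skipped, modelling a dead radio; if another path
    exists, the mesh 'heals' by routing around it."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# Hypothetical four-node mesh: A can reach D via B or via C
mesh = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_route(mesh, "A", "D"))                # ['A', 'B', 'D']
print(find_route(mesh, "A", "D", failed={"B"}))  # ['A', 'C', 'D']
```

Real mesh protocols such as Zigbee use distributed routing rather than a central search, but the redundancy principle is the same.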
  10. Tim Kannegieter


    There are a range of antenna options such as reverse polarity SMA (RPSMA), a type of screw-on connector commonly seen on small two-way radios. RPSMA will typically be used where you want the antenna externally mounted on a case, such as a vending machine. U.FL connectors are an alternative type of connector, mounted internally. For price-sensitive applications a wire antenna soldered directly to the module may provide sufficient performance. The RF receiving performance of antennas needs to be considered in detail. A whole lot of design considerations come into play there. Is there strong sunlight? Is radar required? Is UV protection required? Is mobility required? Are there environmental considerations, such as exposure to water? Are there physical space constraints? What physical security is necessary, for example, to prevent and/or reduce the impact of theft and vandalism? One possible strategy might be to select a cheap antenna, implement maintenance plans to replace them as required, and accept the expense of lower performance. Another is adding redundancy such that damage to one antenna will not disable the whole system. The kind of connector is another consideration. How do we physically screw the antenna in? How robust does it need to be? The antennas above left are stud-mounted connectors. The entire assembly is waterproof. Some typical examples of antennas follow. The easiest resonant antenna is a 1/4 wave whip. The speed of light, denoted as c, is 3x10^8 metres per second. Radio engineers think of a wavelength as 300 metres divided by the frequency in MHz, so a quarter wave is 75 metres divided by the frequency. For example, at 750 MHz a quarter wave is 0.1 of a metre, or 10 cm. A 920 MHz quarter wave whip might be about 8 cm. A 2.4 GHz one is about 3 cm. There are various types of antennas. Rubber duckies are a common one used for two-way radios. Patch antennas on a ceramic element can also be used.
Yagis are another, which can have multiple elements and which have the characteristic that they are very directional. A purpose-built antenna design needs to cover all the frequencies being used for the application, which may be more than one. A 920 MHz system, for example, may also require GPS, 2G/3G/4G or satellite communications, for which a custom antenna could be constructed to service all the required frequencies in one antenna. The effective isotropic radiated power, EIRP, is a theoretical concept. A point radiator is said to radiate out into a sphere in all three dimensions, which is not a very accurate model of actual antennas. Practical scenarios must consider the antenna gain. Most importantly, the direction of that gain results from restricting the angle of radiation. Restricting the angle of radiation to 90 degrees, for example, will create a high-gain antenna. For a Yagi, the angle of radiation might be 10 degrees, and so an even higher gain. High gain resulting from a reduced angle of radiation is beneficial, but the designer must also consider whether that is suitable for the terrain. When outdoors and in hilly country, for example, a high-gain antenna might miss the other antenna because the angle of the radiation is too restricted. Similarly for a Yagi antenna. The designer must consider the tradeoff between higher gain and a more restricted angle of radiation to determine if the antenna is suitable. Antenna design must also consider power restrictions. In a 920 MHz scenario with a maximum of one watt of EIRP and a really high-gain antenna, it may be necessary to back the power off. Most radio modules will allow selection of the power output, such that the power can be reduced as the gain of the antenna is increased. This is really important: it is the whole system (transceiver and antenna in combination) that is not allowed to radiate more than one watt EIRP.
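The quarter-wave and EIRP arithmetic above can be sketched as follows; the transmitter power, antenna gain and cable loss figures in the example are hypothetical:

```python
def quarter_wave_m(freq_mhz):
    """Quarter wavelength in metres: 75 / f(MHz), from c = 3e8 m/s
    (wavelength in metres is 300 / f(MHz))."""
    return 75.0 / freq_mhz

def eirp_dbm(tx_power_dbm, antenna_gain_dbi, cable_loss_db=0.0):
    """EIRP = transmitter power + antenna gain - feed losses (in dB terms)."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

LIMIT_DBM = 30.0  # 1 W EIRP expressed in dBm

print(round(quarter_wave_m(920) * 100, 1))            # 8.2 (cm whip at 920 MHz)
print(eirp_dbm(27.0, 6.0, 1.0))                       # 32.0
print(eirp_dbm(27.0, 6.0, 1.0) <= LIMIT_DBM)          # False: back the power off
```

Here a 27 dBm (0.5 W) transmitter into a 6 dBi antenna with 1 dB of cable loss radiates 32 dBm EIRP, over the 1 W limit, so the module's output power would need to be reduced by at least 2 dB.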
Sources: Material on this page has primarily been sourced from the following: Presentation by Phillip Lark, Engineering Manager, Braetec titled Front End Integration: Connecting sensors to the cloud
  11. Tim Kannegieter

    Smart Cities

    Of all the fields that IoT could be applied in, the one that has received the most attention and hype is how it will enable the concept of a “smart city”. Smart cities are those that leverage ICT systems to enable “smarter” decisions and more efficient processes in the management of community assets. The IoT greatly expands the capability to bring this “intelligence” to a broader range of assets that previously were not digitised. Cities often have multiple but disconnected smart programs around topics such as transport, energy use, air quality and so forth. However, the aspiration of the smart cities concept is to address complex issues that cross multiple functional areas to drive better outcomes such as the liveability, sustainability and economic viability of our urban environments. Virtually every aspect of the operation of urban environments is amenable to IoT applications – from the work of councils to the operation of city-wide water/waste infrastructure, from major community assets like hospitals through to local community initiatives, from regional transport planning to the sharing economy, there is no end to the potential range of ideas. Many of the major industry verticals such as utilities, transport and healthcare all converge in cities, making them a fertile ground for IoT. However, the challenge for the smart city concept is finding consensus across cities to accelerate IoT market development not in terms of vertical sectors but in a multi-disciplinary approach spanning policy, regulation, designers, engineers and operators. Smart cities are a scale-neutral and geography-neutral idea. They can be scaled up from a plaza, a park or a street, all the way up to a region or a nation in any part of the world. A key concept underpinning smart cities is the idea of “open data”: governments provide the data their sensors deliver free of charge to anyone in the community who may want to create new “smarter” services around it.
To that extent, smart cities encourage and empower their citizens to drive the innovation agenda. An example of this is Transport for NSW releasing rail, ferry and bus information to companies wishing to create their own apps around public transport. In smart cities, the smart component (technology and data) should not take precedence over the sustainability, liveability and workability of a city, but work to enhance them. Evolution and drivers The concept of a smart city has evolved from a top-down technology-driven approach which did not take into account how city systems worked; through technology-enabled outcome-driven change; to a collaborative approach between developers, residents and governments which goes beyond the idea of smart cities as the end pipe of siloed smart solutions for parking or lighting. As well as a global appetite to embrace technology to improve the experience of our cities, another major driver of the global smart cities agenda is reducing the carbon footprint. The construction sector has lagged behind agriculture and manufacturing in adopting disruptive IoT technologies to improve energy efficiency and reduce emissions. There are also opportunities to make better use of enabling technologies such as building information systems and building management systems to drive change in Australia. Standards and guides With such a complicated landscape, a key issue for IoT and smart cities is common standards. Governmental organisations don’t want to be locked into vendor solutions. They also have a lot of legacy systems. So a big challenge for city authorities is how to invest and which standards to uphold. A key standards initiative in the smart city space is HyperCat, a specification that allows IoT clients to discover information about IoT assets over the web. With HyperCat, developers can write applications that will work across servers, breaking down the walls between vertical silos.
A group called Hypercat Australia has been formed to support the roll out of the specification in Australia. Another supporting initiative is the PAS 212:2016 standard which provides for an automatic resource discovery for the IoT. On the global stage, standards for smart cities have been around for some years. Smart Cities Council has compiled some of the most common smart cities standards worldwide in a Smart Cities Guidance Note published in mid-2017. In 2013, the Smart Cities Council developed the Smart Cities Readiness Guide as a resource for governments around the world working to develop smart cities. The diagram below shows a framework from this guide which identifies the responsibilities of local authorities in delivering services and infrastructure (dark blue), the general roles that enable them to carry out their responsibilities (orange). Diagram courtesy of Adam Beck, Smart Cities Council The light blue horizontal section of the diagram above shows the digital transformation technologies that will provide the smart city’s ‘smarts’, including data management and analytics. Government support and global initiatives Australian investment in infrastructure, and alignment in local, state and federal funding for smart cities are also providing conditions for successful development of smart cities. The Australian Government has developed a “Smart Cities Plan”. This includes a program for funding smart city initiatives. There are also other government reports and strategies supporting the development of smart cities in Australia (see Further Reading at the end of this page for links to more information). There are a number of associations specializing in this area including the Australian Smart Communities Association. Other nations, including New Zealand, Canada, Dubai, India and the US have also launched initiatives within the global smart cities agenda. 
Technologies Some of the technologies that have the potential to be used in smart cities are: cloud, IoT, big data and analytics, robotics, autonomous vehicles, drones, wearables, computer hardware and software, machine vision, and smart metering of utilities. Sources The information on this page has been sourced primarily from the following: Webinar titled 'How Machine Vision Helps Realise the Smart City Concept' by Ryan Messina, Director and System Engineer, Messina Vision Systems, delivered to this community on 4 July 2017; Webinar titled ‘A Roadmap for Smart Cities’ by Adam Beck, Executive Director, Smart Cities Council Australia New Zealand. Further reading: National Smart Cities Plan (Australia); National Cities Performance Framework; Report on the inquiry into the role of smart ICT in the design and planning of infrastructure; Smart Cities Guide for built environment consultants.
  12. Description: While IoT discussions often focus on different types of sensors, what is equally important is what you do with the data that has been collected. Buzzwords used loosely, like ‘big data’ or ‘analytics’, can obscure the critical step of transforming data into valuable insights. How are complex formulas implemented in the collection of data? How are cleansing operations performed? How is data presented in a way that people can really make decisions with it? This case study looks at what to do with IoT data after it is collected. Source: Webinar titled Obtaining Analytics from IoT Data by Jorge Lizama, lead architect for data and analytics solutions, GHD Biography: Jorge is part of GHD’s initiative to unlock new value for clients by combining data analytics with the company’s understanding of global infrastructure and built assets. With more than 10 years’ experience in data technology, Jorge’s career has included leading consulting companies in Australia and Latin America. Introduction Data analytics encompasses many buzzwords including big data, in-memory computing, cloud reporting and cloud infrastructure. The underlying question is how to make data really helpful, which this article will illustrate. We start with the identification of the requirement and the problems faced, cover the technical instrumentation, then look at why certain technologies are decided on and the final outcomes achieved. The requirement The case study is focused on a project implemented in 2016. The brief was around observing, studying and understanding the vibrations or the movement of a building, which hosts a facility with highly sensitive instrumentation. There was a need to understand if a new tunnel that was being built under the building could potentially cause problems for the laboratory through increased movement or vibrations. 
The instruments are so sensitive that any movement beyond a certain threshold potentially causes a problem either in the readings or in the outputs this instrumentation provides. The building is a very tall, lean tower. It was thought that wind was the main cause of movement, and the concern was that the tunnel underneath could increase the susceptibility of the structure to wind-driven movement or vibrations beyond the threshold for the sensitive instrumentation. The project involved monitoring any events that could potentially create movement over a certain threshold. The aim was to identify potential factors affecting this movement, capture these events, alert clients about them and provide insights about these events. Monitoring The project began with installing instruments, including weather monitoring. Seismic sensors were installed to understand the movement, with a Trimble sensor measuring the displacement from a specific centre. Also installed were slope or tilt meters to detect any type of slope from a centre or from a 90-degree angle for the building. All data is collected by a server-based system that captures the data every time one of the sensors provides a reading over a threshold. The monitoring system sends email alerts when the threshold is reached. For example, in a scenario of more than 100 mm of deviation or vibration, an alert is sent by email to relevant people, and data is collected for the event to improve understanding of what happened. Obtaining analytics from big datasets The volume of the data is massive, with 12 sensors collecting more than a million records a day, which makes it very hard to analyse the data as a whole. That's the crux of the problem. When the data set is large or complex enough, it cannot be dealt with in a standard database or a spreadsheet in the normal way. The concept of big data is very loose. How much data is big data?
A good definition is that big data is something that cannot be dealt with using traditional technologies. Normally, a million records a day would not be called big data, but if the questions being asked of that data cannot be easily answered by traditional technologies, then it becomes complex. Our challenge was to understand how the movement of the structure correlates to all the readings. We started by combining the datasets and looking at how the readings from each sensor were related. For example, how are the direction and the magnitude of a displacement related? The displacement happens in two measures – distance and position of the displacement. So how do they correlate with each other? Is it relevant? How is the relationship visualised in a way that is meaningful? These are questions that are not simple to address using real-time analytics. This is where the suite of approaches that encompass the Internet of Things provides a potential solution. However, a practitioner needs to determine which of the approaches outlined below is relevant for any given problem. No to Predictive Analytics For this case study, even though someone could call the correlational analysis advanced analytics or similar, there is no need to use any advanced tool. Any rapid data-mining tool should be able to create a simple visualisation of real-time analytics that gives a clear correlation between the different trends. Obviously, looking at the data, there are a lot of things that could be put in place using predictive analytics. For example, understanding what leads to an event and how this may predict when another event can happen. But in this case the aim is not about predicting movement, because there is not much that can be done about the vibrations. The challenge is about understanding whether the vibrations are going to have an effect on the instrumentation or not. No to Big Data With big data one considers the use of a Hadoop type of environment.
In this case, we had 300 million to 400 million records to be analysed in one go, but this compresses to as little as 20 gigabytes of data. So the volume of data can be compressed enough to be managed without any need to go to big data. Big data is excellent for storing and retrieving incredibly massive amounts of data, but the key operations in this case are things like joining different datasets and doing the computing. The problem with the displacement sensor is that the deviation and the direction of the deviation can't really be interpreted in just one dataset; the data has to be transformed, and that takes a lot of computation. That's not what Hadoop is for. Other tools have to be used alongside Hadoop, like Spark, to do that type of computing. We are not talking about a big data environment by itself, so for this case we discarded the need for big data.

No to SQL Database

SQL databases are beautiful for putting data in. They can be ideal for a lot of sensors, but with just 12 being considered in this case, it's not really suited. SQL databases have the same problem as Hadoop when getting data out, especially when real-time analytics is included.

Cloud computing

Cloud computing would normally be a perfect environment to build this application in. In IBM, SAP or Amazon's cloud, or any of the clouds that are around, there are enough data environments to create a full real-time analytics environment able to do what is needed. The only reason cloud computing wasn't used in this case was that all the elements we needed were already in place in-house at GHD.

Data Integration

Data integration was needed because it was known that the existing Trimble Pivot database was not able to cope with the type of real-time analytics that needed to be done. The database could store the data, but was not able to handle complex queries.
With real-time analytics, all the computation needs to happen on the spot and provide a fast outcome. Data integration makes use of Extract, Transform, Load (ETL) tools. There are also newer players like Enterprise Feedback Management (EFM). These are the tools used to bring the data from one point to the other.

In Memory Computing

In terms of the core data environment, what is in-memory computing and why is it used? In short, it provides computation power! In databases, one of the biggest problems is computations that must happen before aggregation, after which the data is joined to the aggregate, making it easier to process. When this has to be done record by record, other data environments simply cannot handle it, or will take days to deliver the outcome. In-memory computing can take on this type of challenge and was needed in this case. Since in-memory computing usually has very good compression algorithms, it also makes good use of the available space. In-memory computing means doing all the computing and complex calculations in Random Access Memory (RAM). In-memory is estimated to be up to 5000 times faster than computing that accesses disk storage. It is useful for real-time analytics that needs to happen on the spot. Popular vendors include Apache Ignite and, more recently, Apache Geode, as well as niche ones like MemSQL and VoltDB. There are also the big brands: IBM, SAP, Oracle and Microsoft.

Interactive Analytics

Data can be moved from one point to the other, stored, and all the calculations needed for the outcomes created, but how can the results be presented? That's where interactive analytics kicks in. Interactive analytics is basically a set of tools that allows data analysts to investigate the data, create visualisations that present easy-to-read results and figures, share them with other people, and answer the questions of the users or the client.
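The record-by-record transformation discussed above, resolving each displacement reading's deviation magnitude and direction into components that can be joined with other datasets, might look like the following sketch. The field names are illustrative assumptions, not the actual Trimble Pivot schema:

```python
import math

# Sketch of a record-by-record transformation: each displacement reading
# (magnitude + compass bearing) is resolved into north/east components so
# it can be joined with, say, wind data. Field names are illustrative,
# not the actual Trimble Pivot schema.
def resolve_displacement(record):
    bearing_rad = math.radians(record["bearing_deg"])  # 0 deg = north
    return {
        "sensor_id": record["sensor_id"],
        "north_mm": record["deviation_mm"] * math.cos(bearing_rad),
        "east_mm": record["deviation_mm"] * math.sin(bearing_rad),
    }

reading = {"sensor_id": "disp-01", "deviation_mm": 10.0, "bearing_deg": 90.0}
print(resolve_displacement(reading))  # ~10 mm due east
```

Run once per record over hundreds of millions of rows, this is exactly the kind of computation that is painfully slow on disk-backed databases but fast in memory.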
The main vendors in this space include Power BI from Microsoft and Tableau, plus IBM, SAP, Oracle, SAS, MicroStrategy and QlikTech. Presenting data in charts and graphs enables users to understand at a glance what is happening and then refine further investigations of a complex phenomenon (such as building vibrations). This is what was needed in this case.

Solution Design

The solution we adopted is illustrated below. The sensors send their data to a 3G router. This is passed to a Trimble Pivot system that collects the data and is the monitoring system generating the alerts. This fulfils the core task of recording every event and alerting the relevant people about each occurrence. Next is the batch load and transformation component, the ETL component. In this case the SAP product is used, but it could be AWS, for example, or any other product; SAP has its own data orchestration system. SAP HANA is used for the in-memory computing. Around 400 million records can be compressed to around 20 GB of in-memory storage. That's one of the key advantages of in-memory: it has very good compression capability, so it can handle a lot of data. It has some limitations, though. The biggest in-memory solution anyone can put in a server at this point is 2 terabytes, so if you are accumulating a lot of data the limit will be reached. Also, to give an idea of costs, doing this in AWS is around $13-$14/hr, so it comes with a price tag. Obviously, as technology advances, in-memory computing will become more of a commodity, but it is still a little pricey. Next is data analysis, featuring a dashboard interface for users and decision-makers. SAP has one called Lumira, which is one of the competitors of Tableau and Power BI. Finally, we have a publishing server. That is where the data analysts are able to publish the outcomes.
This dashboard answers the client's questions in real time; the questions go to the in-memory computing. Every half an hour, the system receives updates from the data service component of the sensors. That's pretty much the flow. From sensor to data to decision-makers is a lap of around half an hour. It could be pushed to real time, but that's not needed in this case.

Outcomes

The outcome is that the client was able to make the key inference for the project. Two charts were used, showing the two different metrics below. The left-hand diagram is the displacement of the building, showing the direction of the displacement and the displacement distance. The other diagram shows the velocity of the wind and its direction. Looking at the two diagrams together shows the building is very clearly being displaced in one direction, northwest, while the biggest gusts of wind are from either the south or the east. So this first visualisation seems to indicate that they are not related. This is what the client was most concerned about: that the wind may be affecting the displacement and that this would be exacerbated by the tunnelling. Much time was spent digging into the data, and eventually it was confirmed that there is no impact from the new tunnel being built.

Question and Answers

Question: What do these tools such as Hadoop and Spark do?

Answer: Hadoop is a very intelligent file system for storing massive data in a cheap way and then retrieving it at very fast speed. It also creates redundancy. IoT data often arrives as log files with millions of records; these cannot be captured directly by Hadoop, so some kind of middleware has to sit in between, but eventually the log file is sent to Hadoop. A key thing about Hadoop is that it allows the data to be accessed by the cheapest laptop or computer available, avoiding the need for massive servers that are very expensive. Spark is an in-memory computing component that goes over the top of Hadoop.
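Conceptually, the map-and-reduce pattern that Hadoop popularised (and that Spark accelerates in memory) can be sketched in plain Python: map each record to key/value pairs, then reduce by key. Here it counts threshold-exceeding events per sensor; the records and the 100 mm threshold are invented for illustration:

```python
from collections import defaultdict

# Conceptual sketch of the map/reduce pattern, not a Hadoop or Spark API.
# Records are invented for illustration.
records = [
    {"sensor": "tilt-01", "deviation_mm": 104.0},
    {"sensor": "tilt-01", "deviation_mm": 12.0},
    {"sensor": "disp-02", "deviation_mm": 130.5},
]

def map_phase(record):
    # Emit a count of 1 for every reading over the 100 mm threshold.
    if record["deviation_mm"] > 100.0:
        yield (record["sensor"], 1)

def reduce_phase(pairs):
    # Sum the emitted counts per sensor key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

events = reduce_phase(pair for rec in records for pair in map_phase(rec))
print(events)  # {'tilt-01': 1, 'disp-02': 1}
```

In a real cluster the map and reduce phases run in parallel across many machines over the distributed file system; the sketch only shows the shape of the computation.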
That's when the processing of data starts, because Hadoop itself is just putting data in and retrieving data; it's not for doing analytics, or real-time analytics. Initially this kind of computing was done with a product called Hive, which is built in with Hadoop, but then Spark came along and is now the preferred option for many. What Spark does is give the client the capability of running analytical applications, like statistical models and complex mathematical models.

Question: How would in-memory computing work if this was a cloud-based solution?

Answer: Many of the cloud vendors, such as IBM, SAP and AWS, offer their own in-memory processing product. So the client effectively has an in-memory database, but it's not on their network. The only trick is that clients have to take care that there could be restrictions on where the data lives. The cloud vendors will tell you, for example, that their in-memory databases are in data centres in Sydney, but the disaster recovery sites of those cloud environments are in Singapore and not on Australian soil. This can be an issue if, for example, you are dealing with government clients.

Question: How will in-memory computing become integrated into greater business functions?

Answer: In-memory computing is used to support core business functions even more than big data is. For example, to support their internal finances and budgeting, governments utilise the planning component of an ERP. In-memory computing can be used for forecasting and trends from a whole year of data, which is pretty much impossible with classical databases.

Question: How long is the data retained in memory?

Answer: In most of the newest in-memory environments, a client chooses what to keep in memory by way of a hot-and-cold data management strategy. Hot data is that which is really needed for the current computation, and the data management strategy will push it into memory.
Even if they want to add another year of data, they can say, "I don't need this in memory right now, so I can send it down to big data and let it wait there." So the most advanced in-memory environments work on top of big data. They are connected, so that if more data is needed, a click of a button gives the capability to manage that.

Question: Does in-memory computing introduce any high risks in terms of security?

Answer: Not at all, because access to it is exactly the same as for any other database environment. If there is a power outage then maybe the whole in-memory system could fail, but there is always persistency on disk. In-memory usually runs on Linux machines, so it is the same as any other database system. For cloud data environments, all the vendors have very good built-in security models. A client can choose anything from leaving everything open to shutting everything down so that nobody can access it except that client, using one specific laptop. Some cloud security systems are better than the internal systems in individual companies.

Question: If it is much faster to access in-memory data, does it make sense to just hold the raw data, or to pre-process it in anticipation of query requests for faster response to user queries?

Answer: The only reason to pre-process data when working with in-memory computing is probably to have smaller data. If it's already pre-processed and aggregated into a smaller dataset, then less in-memory space will be used. If it's just for the sake of trying to speed things up, to be quite honest, in-memory is already quite fast by itself. At this point the main thing that makes in-memory challenging is the cost, which can be up to $50/hour.

FINDING OUT MORE

You can view a recording of the webinar on which this article was based.
If you want to know more about the Internet of Things and the various components like those described in the above case study, then do please join Engineers Australia’s Applied IoT Engineering Community by registering on this website. We run a regular program of webinars exploring the applications, opportunities and challenges of these technologies.
  13. If you had $50 million to spend on making your city "smarter", what would you do? How do you think IoT could actually help make cities smarter? Just what is a smart city, anyway? If you have a great idea, please share it. And maybe you should apply for funding. The deadline is 30 June 2017. See https://www.business.gov.au/assistance/smart-cities-and-suburbs-program
  14.
    To Register: Like Engineers Australia's Facebook page and you will automatically receive a notification when the live event begins (make sure your mobile & desktop personal notification settings are turned on).

Title: The IoT big picture

Presenter: Geoff Sizer, CEO of Genesys Electronic Design, immediate past chair of Engineers Australia's ITEE College.

What you will learn:
Examples of IoT applications
Exploration of the IoT components
Challenges facing IoT

Description: In this Facebook Live event, Geoff Sizer will be discussing big-picture questions around the Internet of Things. He will aim to make the subject real for people who struggle to understand what it is and its potential. It will also cover the major challenges facing the uptake of IoT.

About the presenter: Genesys founder and CEO Geoff Sizer has a lifelong passion for electronics and technology, and an ongoing commitment to the electronics engineering profession. He has more than 35 years' experience in electronic product development, ranging from complex systems to simple consumer goods for a diverse range of industries and applications. Geoff is a Fellow of Engineers Australia and a Chartered Professional Engineer. As a former President of the IREE, Geoff was instrumental in the formation of the ITEE College in Engineers Australia and is its immediate past chair. He has championed the formation of the Applied IoT Community of Practice and is the Community Leader for 2017. During his career Geoff has acted as a Director or Chief Technical Officer for several leading technology firms, including Advanced Systems Research Pty Ltd, Advanced Spectrum Technologies Pty Ltd, EMC Assessors Pty Ltd, Telezygology Inc and Embertec Pty Ltd.

When: 12 midday AEST (Sydney) on 30 May 2017. The discussion will last 30 minutes, followed by questions from the live participants.

Where: The presentation is by webinar. After registering you will be sent details of how to log on.
Cost: This presentation is free to members of Engineers Australia (EA) and the public.

How to register: Please "like" the Engineers Australia Facebook page and you will automatically receive notifications when the live event begins (make sure your mobile & desktop personal notification settings are turned on).
  15. Personally, I think that we will end up with a mix of specialist sensors and more general ones that leverage cognitive computing and machine learning. The example of failures above misses an important fact about machine learning: such systems are trained. Yes, they may get it wrong, but over time they get better and better, and are usually more accurate than humans in the end. Also, such machines will improve over time. The latest announcement on Google Lens has great promise, I think. I liked this example of cognitive computing where I think super sensors could be used: "One example of the application of cognitive computing in IoT is in health care for the elderly in their own homes (courtesy IBM). Asking the elderly to wear sensors is problematic because they may not raise an alert when they should or they alert when they shouldn't and people stop wearing them after a while. An alternative approach is to instrument other things in the house such as fridge doors, light switches, bathrooms, movement sensors, and maybe infrared sensors etc. The cognitive software can then build up an understanding of what normal looks like. When something abnormal happens, the system can then raise an alert and make a call to the emergency services."
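A toy sketch of the "learn what normal looks like" idea in the quoted example: fit a baseline from sensor history, then flag departures from it. This is entirely illustrative (invented data, a simple z-score rule), not how IBM's cognitive system is actually built:

```python
import statistics

# Toy sketch of learning "what normal looks like" from sensor history
# and flagging departures. Entirely illustrative; invented data.
def fit_baseline(history):
    """Summarise normal behaviour as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations from normal."""
    return abs(value - mean) > z_threshold * stdev

# Invented daily counts of fridge-door openings for an elderly resident.
history = [21, 19, 23, 20, 22, 18, 21, 20]
mean, stdev = fit_baseline(history)
print(is_anomalous(2, mean, stdev))   # a near-zero day is abnormal
print(is_anomalous(20, mean, stdev))  # a typical day is not
```

A real cognitive system would learn far richer, multivariate patterns across many instrumented devices, but the principle of modelling normal and alerting on deviation is the same.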
  16. Introduction

Cognitive computing is an advanced form of data analytics that, broadly speaking, leverages the power of artificial intelligence and machine learning. With machine learning, the computer learns from the data that it is monitoring. For example, with unstructured data, a cognitive recognition system can be shown images that comply, or do not comply, with the condition we are looking for; the cognitive software then learns how to recognise the difference between them. The same is true for analysing more structured data. For example, when looking for patterns in sensor data that indicate potential failure, past data on failures is fed to the cognitive software, which analyses it to discover correlations that are then used in rule-based systems to detect and predict failure in the future.

Cognitive systems stand in contrast to more traditional programmed systems, and proponents claim they are far better suited to IoT because of the large volume and multivariate nature of the data. IoT data can be sourced not only from a huge range of traditional sensors (covering, for example, vibration and temperature) on the equipment itself, but potentially from other sources such as weather data and even social media data, as people react to conditions around them. This means the data has many potential ways of being analysed. IoT is not necessarily a case of knowing what question to ask, doing the analysis and coming up with an answer. Increasingly, there is so much data coming in that it is difficult to even know what questions to ask. Cognitive systems learn as they look at the data, and they find insights the analyst may not even be suspecting.

Examples

One example of the application of cognitive computing in IoT is in health care for the elderly in their own homes (courtesy of IBM).
Asking the elderly to wear sensors is problematic because the sensors may not raise an alert when they should, or they alert when they shouldn't, and people stop wearing them after a while. An alternative approach is to instrument other things in the house, such as fridge doors, light switches, bathrooms, movement sensors and perhaps infrared sensors. The cognitive software can then build up an understanding of what normal looks like. When something abnormal happens, the system can raise an alert and make a call to the emergency services.

Another example (IBM working with Medtronic in the US) is helping diabetic patients manage their blood sugar levels. When blood sugar goes extremely low it triggers hypoglycemia, which can cause a person to go unconscious or even potentially die. Cognitive computing capabilities are being used to detect patterns that indicate likely hypoglycemia two to three hours in advance, which allows the patient to take corrective action: eat some food, adjust their level of exercise, and other things to avoid the hypoglycemia occurring at all.

Drivers

The growth of cognitive computing is being driven by many of the same factors as IoT in general, but in particular the growth in pervasive connectivity and cloud computing. The ability to process data in the cloud is bringing many advanced analytical capabilities to bear on applications in the field that previously were not possible. In terms of uptake, the growth of the IoT is leading many organisations to consider these technologies for the first time. As organisations install more sensors and instrument more things, they often get more data than they have ever had before and are in a position to experiment and innovate.

Challenges

A challenge in cognitive computing is knowing what data to collect. Storage of data is not without cost, and ideally there would be a systematic way of determining which parameters in any given context may be useful to a cognitive system.
However, it is dangerous to assume upfront what the data will reveal. A general rule may be to gather more data rather than less in the early phases, when things are being learned, and then pare back data collection once rules have been established. Fortunately, IoT data from traditional sensors is usually quite compact, so the problem usually only arises when richer media is being used. Another approach is to use edge computing principles to push processing capability out to devices like routers, switches and even hard-wired sensors, rather than having to send all the data across the cloud.

Sources: Webinar by John MacLeod, Internet of Things Technical Specialist, IBM, titled Engineering Jeopardy: How Cognitive Computing is being enabled by the Internet of Things.
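The edge-computing approach mentioned above, filtering at the device so only significant changes cross the network, can be sketched as follows. The 5% change threshold and the readings are invented for illustration:

```python
# Sketch of edge filtering: instead of streaming every raw reading to
# the cloud, the edge device forwards only readings that differ
# significantly from the last transmitted value. The 5% threshold is
# an invented example, not a recommended setting.
def edge_filter(readings, min_change=0.05):
    """Yield only readings that changed by at least min_change (fractional)."""
    last_sent = None
    for value in readings:
        if (last_sent is None or last_sent == 0
                or abs(value - last_sent) / abs(last_sent) >= min_change):
            yield value
            last_sent = value

raw = [100.0, 100.1, 100.2, 107.0, 107.1, 95.0]
print(list(edge_filter(raw)))  # [100.0, 107.0, 95.0]
```

Six raw readings become three transmitted ones; over millions of records a day, this kind of device-side reduction is what makes edge computing attractive.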
  17. A new network around advanced sensing technologies opened in February. http://www.nssn.org.au/single-post/2017/02/01/Smart-sensing-network-to-probe-pollution-human-health-wildlife
  18.
    View the recording: This webinar has passed. The recording is available free for EA members on MyPortal. Navigate to Industry Applications > Utilities ----------------------------------------------------------------------------------------------------------------------------------------------------------- Presenters: Victor Polyakov, Managing Director, Tibbo Systems What you will learn: SCADA and Smart Metering systems will become extinct! IoT Platforms are already changing the digital energy landscape Smart Grid will not use cross-industry software platforms but rather dedicated software Description: How does Industrial IoT change the Power Engineering industry? What are the Things being connected and what cases does it cover? Answering these questions, the webinar is dedicated to the transformation of energy management influenced by the Internet of Things. About the presenter: Victor Polyakov started Tibbo Systems' AggreGate project, which changed its description over time from a Device Management Platform to an M2M Platform and later to an IoT Platform. His early career was in a Network Operating Centre with engineering duties in a telco company, TeleNET. Today Victor plays a key role in the AggreGate IoT Platform's evolution, taking part in system architecture design and establishing long-term partnerships with OEMs and IoT system integrators across network management, industrial automation, DCIM, vehicle tracking and other engineering applications.
  19. Interesting article here, outlining an artificial intelligence driven approach to reducing the number of sensors we will need to realise the vision of IoT. See https://www.cio.com.au/article/619586/google-rise-super-sensor/?fp=4&fpid=51241
  20. Thanks to Sherry Moghadassi FIEAust, Deputy Chair - Australian Society for Defence Engineering NSW at Engineers Australia Sydney Division for alerting me to this amazingly small IoT device from the University of Michigan in the US. https://www.eecs.umich.edu/eecs/about/articles/2015/Worlds-Smallest-Computer-Michigan-Micro-Mote.html and http://spectrum.ieee.org/semiconductors/processors/specksize-computers-now-with-deep-learning and http://transmitter.ieee.org/making-iot-smarter-micro-motes/
  21. Google has launched a new service called Cloud IoT Core. See https://cloud.google.com/iot-core/ Their press release is here: https://cloudplatform.googleblog.com/2017/05/introducing-Google-Cloud-IoT-Core-for-securely-connecting-and-managing-IoT-devices-at-scale.html
  22. The answer is "no" according to Schneider. See: http://www.schneider-electric.com.au/en/work/campaign/m580-epac/insight-article/industrial-programmable-logic-controllers.jsp
  23. Recording: This event has now passed. Members of Engineers Australia can view the recording free on MyPortal. Logon and navigate to Technologies > Interoperability. Others can purchase the recording on EA Books. Title: Building interoperability into IoT solutions Presenters: Daniel Pratt, Regional Sales Manager, Reekoh What you will learn: How to architect and deploy a scalable IoT Solution An overview of an IoT Platform Design considerations for an IoT Solution Description: Interoperability and integration are key to successful IoT projects. The sheer complexity of thing-to-thing relationships in large deployments is challenging. For large-scale deployments, it is unrealistic to be writing thousands of different bits of code. A key to interoperability in IoT is to adopt a mindset of modularity. The more modular your IoT system, the more likely its things are to be interoperable. In this presentation, you will learn how to use a modular platform to build interoperability into your projects from day one. About the presenter: Daniel Pratt works with clients on their IoT projects, helping reduce the complexity of integrating the many different protocols, products and services that present themselves. Before joining Reekoh, Daniel worked in digital transformation roles throughout Australia, SE Asia and the UK. When: 12 midday AEST (Sydney) on 23 May 2017. The presentation will last 30 minutes followed by question time. Where: The presentation is by webinar. After registering you will be sent details of how to logon. Cost: This presentation is free to members of Engineers Australia (EA), the Australian Computer Society (ACS), the Institution of Engineering and Technology (IET) and IEEE. Just provide your membership number during registration for the event. The cost for non-members is $30. How to register: Please register on the Engineers Australia event system.
Note, to register you need to have a free EA ID which you can get on the first screen of the registration page. Take note of your ID number for future events.
  24. A new service from Shodan. See http://www.cso.com.au/article/618595/shodan-search-engine-launches-botnet-hunting-service/
  25. See https://www.itnews.com.au/news/telstra-centralises-it-product-dev-under-new-labs-460218