A Taxonomy of Industrial IoT Solution Building


The Industrial Internet of Things – IIoT – promises a lot of business opportunity. But nobody actually wants IoT… or data… people want business outcomes! And at a reasonable price.

Developing and operating a successful IoT product or service is difficult, for many reasons. To be successful, we need to consider a set of challenges and have good answers to them. In this article I will try to provide a taxonomy of the problem space of building IoT solutions. It covers the different aspects, from finding opportunities and working backwards from the idea, to the question of how to get the required data, down to how we can trigger actions and measure business outcomes. The goal is to provide a holistic view of the topic and some guidance on how to approach the different aspects.


The base assumption here is that IoT is used as a technology to achieve data-driven insights, mainly by analyzing data acquired from various sources. Of course, there are other use cases where IoT technology plays a significant enabler role but may not need the entire set of things that I will discuss here.

IoT-Taxonomy

The picture shows 4 major areas:

  • The business model and business case: different ways to generate opportunities and to assess the potential cost and risk of building a solution
  • Features and functions that are required in most solutions so that they technically work – from data acquisition to measuring the value of the actions that were taken
  • Management and operations: functions that are required to operate the technical systems that run the software and store the data
  • External factors and constraints, such as data laws and regulations

 

Depending on the actual use case you want to address, the answers to the questions raised here may be simple or complex. However, to some extent, all of the mentioned elements need to be considered: if not explicitly, they will show up implicitly. Often, building IoT solutions is underestimated because of the sheer breadth of considerations required. I am not claiming that everything needs to be considered in full from day one when you intend to build a business in IIoT, but I would suggest at least building a strategy for how to deal with these things, to minimize the risk of ending up in a dead end.

Let’s walk through the four areas I proposed.

The Business Model

[Edit: Please check my book recommendation on the business side; Bruce did a much better job than me explaining the business of IoT.]

Building a pipeline of business opportunities is the first step in any activity that should help a business grow or sustain itself. So we need to understand what we can do to structure idea generation. Basically, this is no different in IoT than in most other technology-based businesses. I see two main approaches:

  1. Develop new ideas by analyzing currently known problems and pain points. This is something that some companies do under the umbrella of a digitalization strategy. You analyze what could be better in your processes, production, ordering, invoicing, your product, your offerings and your customer engagement by reasoning on top of data you already have (somewhere, in principle). What could be improved if we were able to reason on top of that data? Where could processes and products be optimized? Where can new businesses be developed based on data? The transformation of classic product sales towards selling e.g. operating hours or units produced (e.g. not selling a machine, but charging a fee for each piece the machine produces; or not selling a wind turbine but charging per kWh of energy produced) is one group of examples where new business models are generated based on data emitted from machines. A lot of after-sales business, e.g. service and maintenance, is data-enabled as well. What if we had more and different data? What could be done then?
  2. Develop new ideas by analyzing the data and applying more advanced technology to it. A data scientist with the right tools might generate insights from a large data set that nobody on the business side would have thought of. Specifically, correlating previously unconnected data sets has the potential to generate a lot of opportunities. With maturing technology it also becomes easier (and more cost effective) to analyze large amounts of data. Machine learning is becoming a commodity tool, so most organizations can leverage it more and more easily. Collecting data and searching for opportunities is thus a valid approach to developing proposals for the business functions.

I think it is important to have both: the known-business-problems side often leads to continuous but only incremental innovation, while the technology- and data-driven side may lead to a disruptive innovation idea. In any case, when it is an idea around industrial IoT, you will have to answer two basic questions: Which action do I want to trigger to generate positive value for the customer, and which data do I need for that, and when? These questions address the source and the destination of all activity: driving value and action out of data. The answers may be different for each use case.

In addition, no matter which use case you pursue, the question of solution cost will arise – today one of the killers of many good business ideas around IIoT. There is the cost to build and develop the solution, plus the cost to operate and maintain it, plus the cost that sits in your risks (e.g. using bleeding-edge technology components or facing unclear customer demand). In the classic cost/benefit analysis and ROI discussion, expected cost plus projected customer adoption leads to pricing. And that price must be lower than the projected value the customer would receive from the IoT solution.

An example: You want to collect data from automation equipment to do predictive maintenance. For that you need to build a solution by integrating several products from the marketplace, build a custom analytics function, integrate it with a service team management tool and operate all of it. So, let’s say the solution costs you X to create plus Y in yearly operational cost. Is X + a * Y lower than the business value of the increased productivity (e.g. based on Z % less downtime of the equipment) over a period of a years? Often, this only renders a positive case when the number of devices is high enough for scaling effects to apply. And how does all that change when risks materialize?
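This back-of-the-envelope calculation can be sketched in a few lines. The figures below are made-up placeholders for X, Y, the yearly value and the period a; a real business case would attach risk factors to each of them:

```python
def iot_business_case(build_cost, yearly_op_cost, yearly_value, years):
    """Compare total cost (X + a * Y) against total projected value.

    Returns (is_positive, margin). All inputs are hypothetical
    placeholders; in reality each would carry its own uncertainty.
    """
    total_cost = build_cost + years * yearly_op_cost   # X + a * Y
    total_value = years * yearly_value                 # e.g. from Z % less downtime
    return total_value > total_cost, total_value - total_cost

# Made-up numbers: 500k to build, 100k/year to operate,
# 250k/year value from reduced downtime, over 5 years.
positive, margin = iot_business_case(500_000, 100_000, 250_000, 5)
# positive → True, margin → 250_000
```

Playing with the operational cost Y quickly shows how dominant it is: halving the yearly value in the example above already flips the case negative.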

As in any project that requires software and data engineering and deals with rather new technology, you need to account for risks that

  1. Increase the development cost and time until the first release
  2. Increase the operational cost after the first release

There are numerous reasons why risks can materialize; the one I want to address in this article is the risk of underestimating the complexity of an IIoT solution because some major elements were not considered. Beyond that, specifically in industrial IoT, there is a set of challenges that are not directly related to technology and solution building but still pose very real risks for successfully running IIoT businesses:

  • Unclear value proposition and business value. The above-mentioned example of increased productivity vs. the cost to build and operate is one instance. End-to-end views on IIoT use cases need to consider the entire chain and must be able to formulate business outcomes and return on investment. This is often not done sufficiently, resulting in unclear value propositions and thus little adoption.
  • Incompatible data formats, protocols and other technological challenges. Industrial systems have much longer life cycles than any consumer product, so having systems in production for 20 or even 30 years is not rare but rather the norm. Many different vendors and many different standards make it difficult to collect and integrate the required data. Also, the data that is required is often available somewhere in the depths of a company’s systems, but the systems do not expose it. Changes are hard to make because of the implied risk, and things get more and more difficult.
  • Unsupportive governance rules and company structures. Exposing the required data, storing it perhaps in a place outside of direct control (e.g. the cloud) and exchanging information is not always in the interest of all involved parties. Reaching a global optimum in e.g. a production line, a factory, across factories or at enterprise level requires actively exchanging information. Old-style governance structures, company policies and competing agendas of stakeholders can thus be a source of obstacles that are hard to overcome.
  • External constraints like data-related laws and regulations. This is a constantly changing set of constraints; I will cover it later in this document.

To conclude: it is important to build a good pipeline of opportunities. These opportunities then need to be analyzed with respect to their business value and the potential cost to build and operate the solution. The cost needs risk factors associated with it, which should include not only the technical side of things but also company-internal obstacles and external factors that cannot be directly influenced.

Features and Functions

In order to build an end-to-end solution for an IoT use case that delivers value through actionable insights from data, you need to provide a solution that considers at least the following nine problem domains, which I would like to explain here at a high level:

Data Exposure: How to expose the actual data you need from the systems and devices? In industry specifically, I often see the problem that data sits in systems or devices that do not provide an interface to easily fetch it. So, first you need to find the systems that have the required data, then you need to figure out how they expose it. Today we assume data can be exposed easily, but not all products and systems were designed with that idea many years back. So you might be faced with a situation where it is hard to get the data out. When the conclusion is that existing productive systems need to be changed (e.g. their software updated) to enable data exposure, the risk profile of the IoT project rises quickly. It might even be necessary to add new sensors to existing processes and systems because the devices in the field do not have the required data points. This requires actual hardware installation and wiring and thus leaves the realm of software-only solutions. It can quickly become expensive when you think of thousands of sensors and the required installations.

Data Source Connectivity – How to get the data out of those systems and devices via a network? Software people, especially those used to web development, may assume that everything is TCP/IP (Transmission Control Protocol / Internet Protocol) or even HTTP (Hypertext Transfer Protocol). That assumption is critically wrong when it comes to industrial IoT. Field devices and software systems speak many different protocols and follow many different data representation and data model standards. In automation systems, even Ethernet- and IP-based communication has not always been standard; older systems may not even have Ethernet ports but rather rely on local bus systems. One of the key challenges here is to get data that is exposed by devices and systems onto a TCP/IP-based transport layer that speaks one of the more common IT protocols like MQTT or HTTP. A few years back, it was hard for an automation engineer programming PLCs (Programmable Logic Controllers) to interface with higher-level systems via TCP or even HTTP, because the programming models and core concepts of OT (operational technology) and IT (information technology) were fundamentally different. Today things are much more aligned, but when you need to get data from rather old systems and equipment this can still be a challenge. On top of that, internet connectivity and things like local DHCP or DNS support should not be taken for granted in industrial environments. Even though it is technically possible in many ways, company policies may deny direct internet access to automation equipment for security reasons. So, in a concrete case you need to check that the data source systems have connectivity on three layers: IP connectivity with established routes to the systems that should receive the data; transport protocol connectivity (e.g. MQTT or HTTP) such that firewalls allow the ports and protocols; and finally making sure that both sender and receiver speak compatible protocols and use compatible security standards like TLS (Transport Layer Security) or certificate-based authentication. One example of the security standard issue: a device that acts as a data source may not be able to store or maintain the digital certificates required for encrypted protocols like HTTPS using TLS. So these devices may not be allowed to talk to central systems (e.g. cloud-based) for security reasons. Another aspect of this is device onboarding and protecting the central data storage against “bad” devices, which I discuss later in the operational functions.
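The three-layer check can be modeled as a simple checklist. This is an illustrative sketch, not a network scanner: the `Endpoint` fields and the example port and protocol names are assumptions, and real checks would involve routing tables, firewall rules and actual TLS handshakes:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    reachable_ip: bool   # layer 1: an IP route to/from this system exists
    open_ports: set      # layer 2: ports the firewalls allow
    protocols: set       # layer 3: e.g. {"mqtt+tls", "https"}

def can_connect(device: Endpoint, backend: Endpoint, port: int) -> bool:
    """True only if all three connectivity layers line up."""
    ip_ok = device.reachable_ip and backend.reachable_ip
    port_ok = port in device.open_ports and port in backend.open_ports
    proto_ok = bool(device.protocols & backend.protocols)  # shared protocol + security
    return ip_ok and port_ok and proto_ok

plc = Endpoint(True, {8883}, {"mqtt+tls"})
cloud = Endpoint(True, {8883, 443}, {"mqtt+tls", "https"})
can_connect(plc, cloud, 8883)  # → True: route, port and protocol all match
```

A legacy device speaking only a fieldbus protocol would fail the third check even with perfect IP connectivity, which is exactly the gateway problem described above.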

Data Transfer – How to move the data to a place where you can analyze it, assuming you cannot do it all in the place where the data sits in the first place? IoT use cases move data from more remote or distributed systems and devices to more central ones. While this may happen in multiple steps (e.g. cell → production line → factory → enterprise), at each step you need to answer how the data transfer is done and to which part of the overall data it applies. Once the fundamental connectivity issues are solved (see above), you need to consider, for instance, the available bandwidth and the cost of data transfer. Especially in high-volume, high-velocity cases, data transport can be a huge driver of operational cost and so needs consideration. How do you select the data that is sent, and how do you compress data to minimize size on the wire? In principle there are two options when it comes to analyzing the data, which is a requirement for gaining actionable insights: you can bring the analytical function to where the data is, and/or bring the data to the analytical function. It needs consideration whether it is more efficient, in terms of data transfer and storage cost, to bring data to a central system or to bring executable code and analytical models down to the device level, which adds a lot of complexity to the solution and requires compute power there. There are many strategies to reduce the amount of data that is sent, for instance: filtering, i.e. sending only selected data based on filter rules; lossless compression like ZIP; lossy compression with much higher compression rates; aggregation, e.g. calculating sums and averages over a time frame; sending calculated values only; and more.
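Three of these reduction strategies side by side; the reading format, the filter threshold and the window size are made up for illustration:

```python
import gzip
import json
import statistics

# Hypothetical raw sensor readings: one temperature sample per tick
readings = [{"ts": i, "temp_c": 20 + (i % 7) * 0.5} for i in range(60)]

# 1) Filtering: only send readings that match a rule (assumed threshold)
filtered = [r for r in readings if r["temp_c"] > 22.0]

# 2) Aggregation: one average per 10-sample window instead of raw samples
window = 10
aggregated = [
    {"ts": readings[i]["ts"],
     "avg_temp_c": round(statistics.mean(r["temp_c"] for r in readings[i:i + window]), 2)}
    for i in range(0, len(readings), window)
]

# 3) Lossless compression of whatever is actually sent
raw_bytes = json.dumps(readings).encode()
compressed = gzip.compress(raw_bytes)

print(len(readings), len(filtered), len(aggregated))  # 60 16 6
```

The strategies compose: filter or aggregate first, then compress the remainder. Which mix is right depends on whether the analytics downstream can tolerate the information loss.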

Data Integration – How to build a common understanding of the information on top of the collected data, and how to make automatic processing of the data possible? When it comes to data analytics, the problem of data integration and data cleansing is still a major cost driver. Data from various sources needs to be integrated in such a way that algorithms and rules can be defined that work on the data in an automated way. Machine learning can ease the problem in some cases, adding the ability to interpret data without a predefined set of rules, but the data still needs to be digestible by the algorithms. Data from different source systems may arrive in different technical formats, different data model representations and different packaging. Some data is provided e.g. in binary files with a row-column format, other data comes in JSON documents or XML files, and so on. There is a high variety of options. So first, software needs to parse the data and bring it into a common format. Several tools on the software market specialize in this problem, often referred to as ETL tools (extract, transform, load). Beyond the technical representation of data, the semantic integration is even more complex. Metadata is required that describes the meaning of the actual data, so that data from several sources can be related and meaningful queries can be created. Simple things like the question of units (is this temperature in Fahrenheit or Celsius?) but also more complex things like the similarity of data descriptions (e.g. does the column ‘prodID’ refer to ‘product’ in this other data set?) come into scope. You will find a large selection of products and services that help here, but the task should still not be underestimated. And it is a continuous effort as new data sources and data types are added to your analytics over time. Part of the metadata management challenge is identifying the original source of data. When moving data from its original source, e.g. a specific field device, through different stages and protocol transformations on the way (example: sensor → PLC → gateway/edge device → cloud messaging service → data landing zone), it may not be easy to understand what the original source was just by looking at the data set. That may require that such metadata is added to the data at some point, and that the central data store keeps this information for further processing.
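A tiny normalization step illustrating both points, unit conversion and provenance metadata. The raw field names (`temp_f`, `prodID`) and the source-ID scheme are hypothetical:

```python
def normalize_reading(raw: dict, source_id: str) -> dict:
    """Normalize one reading into a common format and attach provenance.

    Assumes a made-up raw format where temperature may arrive in
    Fahrenheit ('temp_f') or Celsius ('temp_c') and the product field
    may be named 'prodID' or 'product', as in the examples above.
    """
    if "temp_f" in raw:
        celsius = (raw["temp_f"] - 32) * 5 / 9   # unit harmonization
    else:
        celsius = raw["temp_c"]
    return {
        "temperature_c": round(celsius, 2),
        "product_id": raw.get("prodID") or raw.get("product"),  # reconcile naming
        "source": source_id,  # provenance must survive later hops
    }

normalize_reading({"temp_f": 212, "prodID": "A-17"}, "plant1/line2/sensor9")
# → {'temperature_c': 100.0, 'product_id': 'A-17', 'source': 'plant1/line2/sensor9'}
```

In a real pipeline this mapping would be driven by per-source metadata rather than hard-coded branches, but the shape of the problem is the same.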

Data Store and Stream – How to efficiently store large amounts of raw, cleaned, processed and calculated data? How to move large data sets between components? Storing the data is usually wanted and required for two reasons: 1) Analytical tools only work on data stored in certain technologies. A few years back, the Hadoop file system seemed to be the standard choice, with an ever-growing set of tools that could work on the data in it. Today we have many more options. Databases, file systems and object stores exist in many different flavors, allowing you to find a good balance between the tools available for the storage system, the cost of data storage, and the performance of ingest and egress. 2) Keeping the data for later. Here, “later” means that a data scientist or an advanced algorithm might want to use the data someday in the future, to perhaps find new insights. Data that is just stored has no value, so data that is not queried should only be stored with a focus on cost minimization, or be purged directly. You therefore need storage concepts that consider data velocity, query requirements, storage cost and data volume, but also data variety, to make sustainable decisions. With the emerging trend of building data distribution platforms, you also get requirements around metadata management, data access management and data distribution. Managing large amounts of data (also referred to as big data) is a challenge of its own and needs specific consideration. Besides actual data storage, data streaming needs to be considered to move data between systems and/or devices. There is a fair number of system designs out there that analyze data while it is moving, without storing the mass data for long-term persistence. Of course, that depends on the business needs.

Data Analytics – How to make use of the data to generate insights? Data analytics can be a complex topic, depending on the size, volume and diversity of the data you want to analyze. Sometimes you need to develop a concept for how to analyze the data based on the desired outcome. Sometimes you need to curate, clean and modify the data before analysis is even possible. There is a large set of tools, techniques and technologies available to analyze the data you have collected. One of the key questions is when analysis should occur, and on which data set. For instance, do you want to analyze data as soon as new data points are available in a stream, or do you want to analyze it at regular intervals in larger batches? This refers to the “Lambda Architecture” concept, or even the “Kappa Architecture”, where everything is handled as a stream. Another important decision is where you run your analysis code. Data might be available in high density in a field-level device or an edge gateway, while not all of it is uploaded to a central instance. Other data might only be available centrally, for example when you want to correlate data from multiple devices, sites or plants, which you can only do at a central data lake or point of storage. You therefore need to consider a distributed analytics architecture, managing and matching data, algorithms and query engines with the available compute power and bandwidth. As a result, you want outputs from these analytical processes that can be used to influence downstream processes. So the outputs must be processable by machines or humans and must have a meaning. In real cases, the technology you use depends very much on the analytical problem: from classic SQL queries on database engines up to machine learning services that you train to build a model and then run inference on new data. The output depends on what you want to achieve, from basic calculated KPIs up to predictions and recommendations. There are several maturity models for this area: in the basic step you analyze what happened in the past based on collected data; then you move to live analysis of what is happening right now, enabling you to react to insights in a more timely manner; then you mature towards predictions of what might happen, so that you can plan ahead; ultimately you aim at a situation where the system provides recommendations that you might automate, to be approved and put into action. Analytics can also be done by humans, allowing people to visually explore the data to “find” interesting correlations and discover insights before you automate this.
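A minimal sketch of the streaming variant: each incoming data point is checked against a rolling window of recent values. The window size and the three-standard-deviations threshold are arbitrary assumptions, not recommended settings:

```python
from collections import deque
import statistics

class StreamAnalyzer:
    """Flag a new data point as anomalous if it deviates more than
    `k` standard deviations from the rolling window of recent values."""

    def __init__(self, window: int = 20, k: float = 3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def push(self, value: float) -> bool:
        anomalous = False
        if len(self.values) >= 5:  # need a few points before judging
            mean = statistics.mean(self.values)
            stdev = statistics.pstdev(self.values) or 1e-9  # avoid div by zero
            anomalous = abs(value - mean) > self.k * stdev
        self.values.append(value)
        return anomalous

a = StreamAnalyzer()
flags = [a.push(v) for v in [10, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0]]
# only the jump to 25.0 is flagged
```

The batch variant would run the same statistics over a stored interval instead; the Lambda Architecture essentially runs both side by side.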

Insight Presentation – How to show the insights to the people who can digest them in a meaningful way? Depending on the value you want to create, you need to consider how to make the insights generated by the analytics step visible and actionable. Creating PDF reports and showing visual dashboards is certainly the first step. Here you need to deal with technology for visual data display, data exploration and reporting. This step is crucial, and you should work backwards from it to determine which analytics are required and, in turn, which data is needed to generate the insights. The insight presentation can be very specific to the target user group, in the exact format they need and in the right context. A CEO who is interested in basic KPIs of a production facility will need a different insight presentation than, for instance, a maintenance service engineer who is interested in predictions of field equipment failures. The generated insights thus need to be put into the context of the user’s responsibility and interest. One dimension here is the timeliness of the information. Is it good enough to generate a PDF report once a month, or do you need active emailing or paging of people once an insight is discovered, e.g. alarms and events generated by the system based on data and analysis in near real time? This also determines which technology and services are required.

Action Trigger – How to automatically trigger activity that makes use of insights? Besides informing humans to allow better decision making, the next step is to automate action triggers. One example is to create a service request case or ticket in your service organization when the analysis generates a warning that a device needs maintenance. This area is all about technical integration with downstream systems. Some more examples: order spare parts in an SAP system, create service tickets in ticket management systems, create tasks in task management systems, report alarms to SCADA systems, start business workflows in workflow management systems… the list can be long. What is important is that the downstream systems allow integration and provide APIs that can be used from the IoT system side. As usual, there needs to be a network connection to the downstream system, authentication and authorization need to work, and the APIs need to be consumed on the IoT system side. When all this is given, you can implement automatic actions. A step between full automation and manual work is semi-manual approval: you could build a system that presents suggested actions to a human user for approval; once the human worker approves, the automation continues. This can be helpful to gain some trust before skipping the human approval.
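How such a trigger might shape a request for a downstream ticket system, including the semi-manual approval step. The field names, the severity rule and the approval flag are all hypothetical; a real integration would follow the target system’s actual API and authentication scheme:

```python
def build_service_ticket(device_id: str, finding: dict, auto_approve: bool) -> dict:
    """Build a payload for a (hypothetical) ticket-system API call.

    With auto_approve=False the ticket waits for human sign-off,
    modeling the semi-manual approval step described above.
    """
    return {
        "title": f"Predicted failure on {device_id}",
        "severity": "high" if finding["probability"] > 0.8 else "medium",
        "details": finding,
        "status": "open" if auto_approve else "pending_approval",
    }

build_service_ticket(
    "pump-42",
    {"probability": 0.93, "component": "bearing"},
    auto_approve=False,
)
# status is 'pending_approval' until a human confirms the action
```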

Field Feedback – How to measure that an action was executed and what business impact it generated? Was the analysis and generated insight helpful, or even correct? It would be fatal to trust analytical results, machine learning predictions or even recommendations without measuring the actual business impact. In an optimal world, you would have data and KPIs on your environment from before and after you brought an insight into action. First of all, this requires collecting the necessary data, as before, and then putting it into the context of the analytics insight that you actually brought to action. Optimally, the new data is fed back into the training of the analytics and machine learning, so that the algorithms and queries can be adjusted, closing the feedback loop.
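Closing the loop can start very simply: compare one KPI before and after the action and decide whether the underlying model needs attention. The KPI choice (downtime hours) and the 5 % threshold are assumptions for illustration:

```python
def evaluate_action(kpi_before: float, kpi_after: float,
                    min_improvement_pct: float = 5.0) -> dict:
    """Compare a KPI (e.g. downtime hours per month) before and after
    an action and flag the model for retraining if the measured
    improvement falls below an assumed threshold."""
    improvement = (kpi_before - kpi_after) / kpi_before * 100
    return {
        "improvement_pct": round(improvement, 1),
        "retrain": improvement < min_improvement_pct,
    }

evaluate_action(40.0, 28.0)  # → {'improvement_pct': 30.0, 'retrain': False}
```

A negative improvement (the action made things worse) would also trip the retrain flag, which is exactly the case you most want the feedback loop to catch.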

Depending on the concrete use case, you may need some features more and others less. But most likely you will have to deliver functionality along the entire chain, to some extent at least. Besides the purely functional viewpoint, do not forget to think about the operational qualities of these features: security; robustness, specifically in dealing with connectivity issues; performance that matches data volume and velocity; operational cost; and development quality for efficiency in further enhancements.

Consider: it is one thing to develop the solution and get a first version (e.g. an MVP – minimum viable product) up and running. But a lifetime of such a solution over multiple years requires effective operation and evolution of the solution as well. Continuous cost easily adds up to a multiple of the cost of developing the solution. Automation can help reduce labor effort, but that automation needs to be implemented by someone, too.

Management Functions

In order to collect data in the field, transfer it, process it, analyze it and so forth, you need to develop and/or integrate software and then operate and maintain it – down to the field level. In industrial IoT this is maybe the most underestimated area in terms of cost and effort. IoT solutions are distributed solutions that require working software on many different levels: in the cloud or a central system where data is stored and big-data analytics are executed, on a fleet of edge devices, and on the actual data source systems – on the shop floor or in IT data centers. All this software needs lifecycle management. If you also deliver hardware for installation in the field (e.g. data collectors or edge devices installed in customer facilities), then that hardware needs full lifecycle support at enterprise-grade quality. Here, the question of service ownership plays a large role. System integrators and solution providers may integrate several hardware and software modules, and it needs to be defined who will provide service and support for the individual components. The devices and software in the field need special attention: when you have to send service engineers out into the field to fix devices, distribute software or do replacements, it becomes really expensive. So automation of the service and maintenance processes is required to keep operational cost and customer friction at a minimum. At the same time, this automation needs to be implemented with security measures that ensure customer production processes cannot be negatively impacted, e.g. by a bad software update that was rolled out automatically. In industrial use cases I see high demand for security controls around these automated procedures, to avoid any negative impact of the IoT-related data collection on the primary production processes. This can build up into a complex set of requirements, specifically in very conservative, security-sensitive or regulated industries.

IoT solution creators have a large solution space for building, integrating, running and operating the software that is required at floor level to collect data and drive automated actions. You can pick any computer hardware that fits the environmental conditions (e.g. ruggedized) and manually install all the required software on it. This is fine for trying things out, but for production it will require a complete service or product that manages the hardware and software on all layers over its lifetime.

For the following features, as a general thought, consider what it means to do each step for one or two devices versus what it means to do it for 10,000 devices. IoT use cases often only become valuable when applied at scale, so scaling the operation of field device software is crucial.

Device Onboarding: How to get a field device (e.g. an edge computer) securely commissioned? (Please see also my other article, “how to deal with industrial devices in IIOT”.) A piece of software needs to be installed in a customer’s network, either on an existing device that can run it or by supplying a dedicated piece of hardware, e.g. a device specifically designed for edge use cases. Here, network security is important: the device needs connectivity to the production systems that hold the required data (which can sit deep in protected networks), and it needs connectivity to where the data should be pushed and where the management software for the devices resides – in many cases a cloud-based system. This alone is a discussion point in larger enterprises that are sensitive to security. Then the device needs to be registered with the management layer and data ingest layer of your central system. You actually want to protect three things: 1) the customer’s data, 2) the customer’s production facility and production process (non-invasive IoT), and 3) your central management, data storage and analytics system. For 3), you want to make sure that only legitimate devices are able to connect to your backend, which implies things like certificates that need to be present on the device. Or you use temporary onboarding tokens that allow a commissioning engineer to establish the link between the device and the central system. In all cases, you need a concept to secure the onboarding and the continuous operation, to reduce the risk of DDoS attacks from compromised or fake devices, data theft and other scenarios. At the same time, and here comes the challenge, you want to minimize the manual effort for device onboarding, because it is pretty expensive to have service engineers travelling the world to do manual work on site. Automated device landing zones are one concept, but they are hard to design securely with no manual on-site effort. Multiple concepts exist, but the general rule is: the more secure and the less manual effort in the field, the more complex (and so more expensive) the solution will be.
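The temporary-token idea can be sketched as a short-lived, HMAC-signed token that a commissioning engineer hands to the device. This is a deliberately simplified illustration; real fleets would more likely use X.509 certificates or a cloud provider’s device-provisioning service, and the key handling here is naive by design:

```python
import hashlib
import hmac
import secrets
import time

FLEET_KEY = secrets.token_bytes(32)  # shared only with the backend

def issue_onboarding_token(device_id: str, valid_seconds: int = 3600) -> str:
    """Issue a short-lived token binding one device ID to an expiry time."""
    expires = int(time.time()) + valid_seconds
    msg = f"{device_id}:{expires}".encode()
    sig = hmac.new(FLEET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{device_id}:{expires}:{sig}"

def verify_onboarding_token(token: str) -> bool:
    """Accept only unexpired tokens with a valid signature."""
    device_id, expires, sig = token.rsplit(":", 2)
    msg = f"{device_id}:{expires}".encode()
    expected = hmac.new(FLEET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

token = issue_onboarding_token("edge-gw-0815")
verify_onboarding_token(token)        # → True
verify_onboarding_token(token + "x")  # → False: tampered signature
```

The short validity window limits the damage if a token leaks during commissioning, which is the main point of the temporary-token pattern.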

Device Lifecycle Management: How to manage the entire lifecycle of the hardware and software that is part of your offering, at industrial quality? In the smallest possible footprint, you would only deploy small pieces of software into an existing system to start collecting data, or even no software at all if the existing systems can provide the required data directly. But typically that is not sufficient. Some use cases will demand installing new sensors, computer hardware, gateways and similar, i.e. actual new hardware, plus the software required to collect the data and manage the uploads with the required security and resilience. For all the new things put into the field to enable the IoT use case, you need a solution to manage their lifecycle. For hardware, this means supplying spare or replacement parts with the required services around them. For software, it means software inventory management, update and patch management, and controlling the software landscape across your IoT fleet in general. Another dimension here is controlling and managing device health, specifically providing tools to diagnose and heal faulty devices. Certificate replacement is one case, but general problems like stuck or crashing applications, issue diagnosis and general health monitoring are also important to consider in order to be able to respond to field problems.

Device Software Management: How to manage software updates and inventories in an industrial environment? Like in the office world, where you want to manage SW rollouts to large fleets of desktop computers, in IOT you want to manage SW rollouts to large fleets of edge devices, data collectors and similar. You need to be able to patch critical security vulnerabilities on all layers (firmware, OS, application) in a short time, with a protocol and proof of the process. You need to ensure that SW in the field neither harms the customer network or the production process nor poses any threat to the customer data, which means securing the SW repositories and the download process. You also want to aim for zero-downtime scenarios with rollback strategies, so that you do not lose data because of downtime of your data collector. Also, over time, you need to be able to bring new functions and features down into the field devices, e.g. adding local analytics, compression, KPI calculation and such. And of course, comply with enterprise policies and governmental rules, e.g. for export control and SW delivery.
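The combination of staged rollout and rollback mentioned above can be sketched as follows. This is a simplified illustration under stated assumptions: the fleet is a list of dictionaries, `health_check` is a caller-supplied function, and the update itself is reduced to flipping a version field. A real update agent would download, verify and activate signed images, but the wave-and-rollback control flow looks similar.

```python
def rollout(fleet, new_version, health_check, wave_size=10):
    """Update the fleet in waves; roll a wave back if any device fails its health check."""
    updated = []
    for i in range(0, len(fleet), wave_size):
        wave = fleet[i:i + wave_size]
        for device in wave:
            device["previous_version"] = device["version"]
            device["version"] = new_version
        if not all(health_check(d) for d in wave):
            # Zero-downtime goal: restore the previous version instead of
            # leaving the wave (and the rest of the fleet) in a broken state.
            for device in wave:
                device["version"] = device["previous_version"]
            return {"status": "rolled_back", "updated": updated}
        updated.extend(d["id"] for d in wave)
    return {"status": "complete", "updated": updated}
```

The design choice here is to stop the rollout at the first failing wave: a bad update then affects at most `wave_size` devices instead of the whole fleet, which is exactly the blast-radius control you want for a critical security patch.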

Data Management: How to deal with terabytes and petabytes of data over time in a cost-effective way? The big value in IOT comes from insights that are generated out of continuous data analysis. Most companies, however, do not purge data after it was analyzed once, but have to deal with a quickly growing amount of data. Data management is required to keep the cost of long-term data storage reasonable while keeping the usability of the data in balance with cost optimization. This includes data catalogs that document where which data can be found, data aging strategies that move data from expensive high-performance storage to low-cost storage, and rules for purging data over time. Another dimension is managing access rights and controlling data access in general on an ever-growing data set. The challenges around these topics become more material the more data you store.
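A data aging strategy can be as simple as a table of age thresholds per storage tier. The sketch below is illustrative only: the tier names and the thresholds (30 days hot, 1 year warm, 5 years cold, then purge) are assumptions, and real retention periods must come from your business and legal requirements.

```python
from datetime import datetime, timedelta

# Example aging policy (assumed thresholds): (max age in days, storage tier).
AGING_POLICY = [
    (30, "hot"),        # < 30 days: high-performance storage for live analytics
    (365, "warm"),      # < 1 year: cheaper object storage, still queryable
    (5 * 365, "cold"),  # < 5 years: archive storage, slow retrieval
]

def storage_tier(created: datetime, now: datetime) -> str:
    """Return the storage tier a record belongs to, given its creation time."""
    age_days = (now - created).days
    for limit, tier in AGING_POLICY:
        if age_days < limit:
            return tier
    return "purge"  # beyond retention: candidate for deletion
```

A nightly job applying such a function over the data catalog is often enough to keep the storage bill from growing linearly with the data volume.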

Infrastructure Operations: How to achieve high operational quality and provision the required resources over time? For all the different parts of your IOT solution, you will need infrastructure and platforms to build on: network infrastructure, storage services, databases, analytics engines, stream engines, messaging engines, machine learning tools, container systems, authentication and certificate management, logs and alerts, and more. This infrastructure needs to be operated. Here, you always have the choice between buying, building and renting. The preferred option for most customers would be to consume all required infrastructure as a service, paying only for what is actually used. Here, cloud infrastructure is the primary choice because you can get a large portion of the required infrastructure as a service. Third-party vendor software can be brought in, but integration and licensing need to be clarified. Whatever choice you make, the operation of the infrastructure will play a significant role in the operational cost equation.

Solution Operations: How to manage the software stacks in the different areas of the solution, from cloud microservices over edge services to in-device software, operated 24/7 and continuously improved? Since the infrastructure you have chosen will not provide a full solution for your case, you will need to engineer the glue and integration between the consumed third-party products and services. Additionally, you will add custom application code, custom SQL queries, specific machine learning, custom user interfaces and third-party system integration on top of it. As in all SW engineering efforts, these elements need operations as well. SW product lifecycle management and operations are further elements that weigh heavily in the operational cost equation. You need to implement processes that focus on continuous improvement and evolution of the solution, that enable you to respond to operational events (e.g. system crashes, security events, …) and that allow you to support customers with a contractual SLA.

Data Laws and Regulations

Last but not least, you need to consider how to deal with laws and regulations. For instance, data that is extracted from factories and other industrial sites may be subject to governmental regulation and, for example, must not leave the country.

How to make sure that SW updates you deliver from one country into devices in another country comply with export control and customs regulations? This also needs to be implemented in software and processes to reduce the operational risk of being non-compliant.

While data-related laws such as GDPR are still emerging, we can expect more data regulation over time. In cases where the data sources sit in the same legal environment as the data storage and analytics, things may be manageable. But when you consider moving data and/or software across borders, you need to be thoughtful on this topic: it can kill your business case from a legal point of view.
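One practical mitigation is to enforce a residency policy in software before any data leaves the site. The sketch below is a toy policy check: the rules table and region names are purely illustrative assumptions, and the actual allowed destinations must be derived from the applicable law and customer contracts, not hard-coded by engineering.

```python
# Hypothetical residency rules: which destination regions each source country allows.
# In a real system this table would be maintained by legal/compliance, not developers.
RESIDENCY_RULES = {
    "DE": {"eu-central-1"},             # example: German plant data stays in an EU region
    "US": {"us-east-1", "us-west-2"},
}

def upload_allowed(source_country: str, target_region: str) -> bool:
    """Check an upload against the residency policy before any data leaves the site.

    Deny by default: a country without an explicit rule gets no upload at all.
    """
    allowed = RESIDENCY_RULES.get(source_country)
    return allowed is not None and target_region in allowed
```

The deny-by-default behaviour is the key design choice: a new plant in an unmodelled jurisdiction stops uploading until compliance has reviewed it, instead of silently shipping data abroad.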

Conclusion

Building industrial IOT solutions is not easy, often because data is not available as needed, data formats and equipment do not interoperate across vendors, and a large set of features for function and operation must be considered. Many analytical problems are unique to the actual business problem you want to solve, so lots of engineering is required, which drives the cost of creating the solution.

With more and more solution providers and products that address parts of the chain, the engineering effort required will shrink over time. Sending data to the cloud, storing and streaming, as well as analytics tools and even machine learning, are commoditized to a large extent, and more such features will follow. Larger industrial platforms emerge that try to address the entire problem space at once and make it easier to build solutions. Startups emerge that provide IOT solutions as a service and even promise no-code solution building. So, in the near future it will be possible for everybody to generate value out of industrial data.

Today, however, the cost associated with building and operating customized IOT solutions is still relatively high, and business cases are hard to turn into positive ROI. What helps are evolving standards, better technology and products that take care of the automation. Scale also helps: implementing use cases that can be rolled out to many factories, devices, utilities and so on, while using one technical IOT solution, multiplies the value and compensates the associated cost better. It is crucial to stay on top of the developments and to keep checking feasibility and cost levels to unlock the value that sits in your data.

The turnaround point will come soon.

Hoping you find that useful,

-M

