Architecting the IoE (Internet of Everything) with 6D Model Leveraging DePIN
- Romeo Siquijor
- Dec 2, 2023
- 7 min read

What is the Internet of Everything (IoE)? According to Cisco, IoE is the interconnectivity between people, processes, things, and data. But I’m not 100% convinced by this definition, so I propose redefining it as “The intelligent interconnectivity of people (including all living things), processes, and technology (including all non-living things and data).”
If we genuinely want to talk about architecting the Internet of Everything, we need to be inclusive of all things, both living and non-living. For instance, I don’t see animals in the scope of the current definition. In this sense, it leaves innovations like BarkGpt (https://www.bark-gpt.com) behind.
IoE has been a focus of CES (the Consumer Electronics Show) in the past few years. But why? People are in desperate pursuit of making inanimate objects and less intelligent things smarter and interoperable. Today, we use smart TVs, smartphones, smart cameras/CCTVs, smart bulbs, smart speakers, and smart plugs everywhere. Smart refrigerators and smart toasters are already out there, yet there’s no massive adoption. We also put sensors to detect motion, noise, air quality, quantities, temperature, speed, revolutions, and angles in places beyond what one can imagine. In 2019, CES gave the Trendforward Innovation Award to Kegg, a smart female reproductive organ sensor. It tracks fertility levels in the fluids down there to check a woman’s readiness for procreation and help her plan her pregnancy with confidence. And guess what? They sold out on the first day of launch. Wow! I don’t know about you, but that’s the last thing I would ever think of putting a sensor on. It beats me, but we live in a weird technological era.
Last week, I gave a keynote in Chicago at the Apex Transformation Assembly, where I talked about “How to Architect the Internet of Everything with the Six Dimensions Data Transformation Model, Leveraging DePIN (Decentralized Physical Infrastructure Network).” The focal point of my talk was data transformation and why “data” is considered the new gold and oil of the 21st century. Finally, I discussed why data is the crucial ingredient in building the foundation of IoE.
Law of Conservation of Data:
Data is neither created nor destroyed; it only changes its state and form.
To illustrate my theory, I showed the evolution of how we have captured global temperature anomaly datasets over the last 173 years. Before the invention of accurate temperature-measuring devices in the 1880s, we measured temperature anomalies using proxy data, like the thickness of rings in tree trunks. From the 1880s to the 1940s, we threw buckets over the side of ships and measured the seawater with low-fidelity thermometers. Later, we measured the temperature of the water funneled into ships to cool their engines. In the 1990s, we started using buoys, which have become the de facto standard for capturing seawater temperature. Today, we have more than thirty-two thousand weather stations, plus satellites, sensors, and other more sophisticated and accurate temperature data-gathering devices.

But what is a temperature anomaly, and why is it important to measure and analyze? A temperature anomaly is measured by combining sea temperature and land temperature per grid coordinate on Earth and comparing that value against a long-term baseline average; the deviation from that baseline is what we call the temperature anomaly. But why is gathering this temperature anomaly data important? Well, it impacts everything in our lives and the sustainability of life on Earth. According to NASA, if we cross the dangerous tipping point of a 2 degrees Celsius temperature anomaly, then sea levels will rise, 70% of marine life will die, polar ice caps will melt, there will be great floods, and salinization of water will impact agriculture and livestock (resulting in significant economic turmoil); as a consequence, the continuity of life on Earth is at risk.
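To make the arithmetic concrete, here is a minimal Python sketch for a single grid cell; the land/sea blending weight, readings, and baseline value are made-up illustrative numbers, not an official dataset or method.

```python
# Minimal sketch: computing a temperature anomaly for one grid cell.
# All readings and the baseline value are hypothetical illustrative numbers.

def combined_temperature(land_c: float, sea_c: float, land_fraction: float) -> float:
    """Blend land and sea temperatures for a grid cell by its land/sea fraction."""
    return land_fraction * land_c + (1.0 - land_fraction) * sea_c

def anomaly(current_c: float, baseline_c: float) -> float:
    """Anomaly = deviation of the current value from a long-term baseline average."""
    return current_c - baseline_c

# Example grid cell: 30% land, hypothetical recent readings vs. a long-term baseline.
cell_now = combined_temperature(land_c=16.8, sea_c=15.1, land_fraction=0.30)
cell_baseline = 14.9  # long-term average for the same cell (illustrative)

print(f"Combined temperature:  {cell_now:.2f} °C")
print(f"Anomaly vs. baseline:  {anomaly(cell_now, cell_baseline):+.2f} °C")
```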
Data is neither created nor destroyed; it just changes its state and form. In the video from Berkeley Earth below, you will find the product of 173 years of data collection, processing, analysis, and visualization compressed into under two minutes of geospatial imagery. Cool! But the question is: so what? What are we going to do about this information? To me, this empirical data is something we cannot refute. Earth is getting warmer, and life on this planet is at risk. The data is speaking to us. But what are we going to do about it, other than claim that it’s fake news?
At the core of my presentation, I introduced the concept of “The 6 Dimensions Data Transformation Model.” I believe this data governance model fits the speed, agility, and security demands of the Metaverse computing era, where everything must be fast, agile, secure, and ubiquitous.
The 6D Model has three facets: the Sensing, Analysis, and Visualization phases, corresponding to where data sits in its lifecycle. The 6D Model suggests that sensing must be edge-based, big data analysis must be cloud-based, and visualization of outputs must be ubiquitous (accessible anywhere, by any device, at any time).
Edge-based sensing — the Model takes advantage of decentralized solutions such as DePIN (Decentralized Physical Infrastructure Networks) and blockchain, or any other form of distributed ledger technology, to act as data-collecting stations and local recording agents. These write and record data blocks into the chain through satellite beacons or nodes, acting as canals before streaming the data into the cloud-based data ocean.
Cloud-based big data analytics — this is where the collective brain of the Internet of Everything plays a significant role, using cloud computing, data analytics, and decision distillation with AI and ML.
Ubiquitous data visualization — data consumption and visualization must be accessible from any device: PCs for Gen Xers, Macs for Millennials, smartphones for Gen Zs and Gen Alphas, and Metaverse VR for Metagens.
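To show how the three facets hand data to one another, here is a minimal, hypothetical Python sketch; every function name and payload field is a placeholder rather than a reference implementation.

```python
# Minimal sketch of the three facets: edge sensing -> cloud analysis -> ubiquitous display.
# All names and payloads are hypothetical placeholders.

def sense_at_edge() -> dict:
    """Facet 1: collect a reading close to the physical source (edge-based sensing)."""
    return {"sensor_id": "edge-node-001", "temperature_c": 21.4, "ts": "2023-12-01T10:00:00Z"}

def analyze_in_cloud(readings: list[dict]) -> dict:
    """Facet 2: aggregate readings centrally (cloud-based big data analysis)."""
    values = [r["temperature_c"] for r in readings]
    return {"count": len(values), "mean_temperature_c": sum(values) / len(values)}

def visualize_everywhere(summary: dict) -> str:
    """Facet 3: render the same result for any client (PC, phone, VR headset)."""
    return f"{summary['count']} readings, mean {summary['mean_temperature_c']:.1f} °C"

print(visualize_everywhere(analyze_in_cloud([sense_at_edge() for _ in range(3)])))
```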

The 6D (Six Dimensions Data Transformation) Model:
1. Decentralized data gathering using DePIN (Decentralized Physical Infrastructure Network) — this stage takes advantage of IoT sensors and wireless interconnectivity solutions to collect datasets (see the sensing sketch after this list). In this space, many crypto-based projects such as Solana, IOTA, IOTX, Flux, ICP, Filecoin, Helium, GEODNET, Vechain, GTRAC, and many other upcoming solid blockchain-tech infrastructure solutions are going mainstream. In 2024 and 2025, DePINs will rise the way NFTs did in 2020–21. Unlike NFTs, however, DePINs have real enterprise use cases that will bolster Industry 4.0, enable borderless supply chain models, revolutionize wireless connectivity, and introduce blockchain-based cloud systems and interplanetary file systems.
2. Decentralized logging using DLTs (Distributed Ledger Technologies) — this stage is where blockchain, Directed Acyclic Graph (DAG), and other DLT solutions show their power. The most bullish layer 1 DLTs are Solana and IOTA. Many major DePIN-focused projects (like Helium and IoTeX) have recently migrated to or at least built interoperability with Solana because of its speed at ~4,500 TPS (transactions per second), versus ~16 TPS for the Ethereum network and ~7 TPS for Bitcoin. IOTA, on the other hand, is an open-source distributed ledger and cryptocurrency designed for the Internet of Things (IoT). It uses a directed acyclic graph (DAG) instead of a blockchain to store transactions on its ledger, and a DAG has higher scalability potential than blockchain-based distributed ledgers. Both solutions take advantage of immutability/non-repudiation mechanisms and tamper-proofing, and both offer public transparency. And in the IoT world, DLTs’ capacity to store and pass only curated on-chain data, versus heavy off-chain data, will be a crucial differentiator in the long run (see the on-chain/off-chain sketch after this list). Filtering the noise and capturing the relevant signal is key in data processing so we don’t waste computing resources and spend too much energy.
3. Data streaming to the cloud — big data analytics must be in the cloud. Data streaming allows relevant on-chain data signals to flow to the distributed ledger system while funneling chatty and heavy off-chain transactions (traditional databases) to a cloud-based repository where big data processing takes place. Combining on-chain and off-chain datasets will offer new avenues to forecast, predict, or prescribe actions in many use cases, taking advantage of correlations between these data sources and other econometric factors (see the correlation sketch after this list).
4. Data framing — this stage is where the rubber meets the road. Of all the steps and phases of this 6D Model, this is the part where human intelligence and intervention are still required. Data scientists, quants, and data engineers still need to decide which data points in the datasets to pick in order to build the right and most accurate models to forecast, predict, and suggest actions based on data-driven analyses (see the feature-selection sketch after this list). Although AI and machine learning can automate much of the data analysis process, processing vast amounts of data quickly and often more accurately than humans, they still lack common sense. AI-based analytics can identify complex patterns and provide predictive insights that are not readily apparent through manual analysis. But AIs at present are still prone to algorithmic biases. They cannot yet learn to think outside the box of what they are programmed to do. While AI can learn over time from pre-fed data and past experiences, it cannot be creative in its approach and go beyond what it was programmed and trained to do.
5. Decision and Knowledge Distillation — to address the shortcomings of AI and ML mentioned in point 4, the 6D Model suggests the teacher-student decision distillation machine learning technique (see the distillation sketch after this list). In this setup, student or child systems receive pre-trained models inherited from their teacher or parent systems, allowing them to discern based on situational analysis. Once a trained model from the teacher is passed to the student, the student calculates, based on specific parameters, how a teacher or parent would discern and act on the situation and conditions presented. This step is vital in making big data processing more efficient. In conjunction with the supercomputing power and capacity of cloud systems, and soon quantum computing solutions (like those of Amazon Braket), this technique will give rise to sentient AIs. The 6D Model also suggests, with a caveat, a function that lets student and child modules make their own decisions based on new data presented to them, without necessarily being aligned with the task at hand, prioritizing value according to their ethics and morality sub-routines.
6. Display ubiquity — refers to how one consumes data and information on different platforms, anywhere and at any time. Some would prefer to see the data output as an informational chart on a PC or Mac, while others would like to see it as a video stream in the Metaverse using a VR headset. But in whatever form data and information are presented, we need to use them to make intelligent decisions and to act accordingly.
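To ground the six dimensions, a few minimal Python sketches follow; everything in them (endpoints, keys, figures) is hypothetical and meant only to illustrate the idea. The first is the sensing sketch for dimension 1: an edge device packages a reading and signs it so a DePIN gateway could verify its origin. The gateway URL and device key are placeholders; real networks such as Helium or IoTeX define their own device onboarding and submission APIs.

```python
# Sketch for dimension 1: packaging an edge sensor reading for a DePIN gateway.
# The gateway URL, device key, and payload fields are hypothetical placeholders.
import hashlib, hmac, json, time

DEVICE_KEY = b"hypothetical-device-secret"                     # provisioned per device in practice
GATEWAY_URL = "https://example-depin-gateway.invalid/ingest"   # placeholder, not a real endpoint

def package_reading(sensor_id: str, metric: str, value: float) -> dict:
    payload = {"sensor_id": sensor_id, "metric": metric, "value": value, "ts": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    # Sign the canonical payload so the receiving node can verify its origin.
    payload["signature"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return payload

reading = package_reading("air-quality-007", "pm2_5", 12.3)
print(f"Would POST to {GATEWAY_URL}:")
print(json.dumps(reading, indent=2))
```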
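Next, the on-chain/off-chain sketch for dimension 2: the heavy raw payload stays off-chain, and only a curated, tamper-evident fingerprint is written to the ledger. The "ledger" and "object store" below are plain Python structures standing in for a real DLT and cloud storage.

```python
# Sketch for dimension 2: keep heavy data off-chain, record only a curated,
# tamper-evident fingerprint on-chain.
import hashlib, json

off_chain_store: dict[str, bytes] = {}   # stands in for cloud/object storage
on_chain_ledger: list[dict] = []         # stands in for a blockchain or DAG ledger

def record(sensor_id: str, raw_readings: list[float]) -> None:
    blob = json.dumps({"sensor_id": sensor_id, "readings": raw_readings}).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain_store[digest] = blob          # heavy payload stays off-chain
    on_chain_ledger.append({                # only the curated signal goes on-chain
        "sensor_id": sensor_id,
        "reading_count": len(raw_readings),
        "payload_sha256": digest,
    })

record("temp-buoy-42", [15.1, 15.2, 15.0, 15.3])
print(on_chain_ledger[-1])  # compact, verifiable record; full data retrievable by its hash
```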
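The correlation sketch for dimension 3 joins a compact on-chain signal with heavier off-chain telemetry in the cloud and checks how the two series move together; the shipment and energy figures are invented purely for illustration.

```python
# Sketch for dimension 3: combining an on-chain signal with off-chain telemetry
# in the cloud and measuring their correlation. The data is invented.
import pandas as pd

on_chain = pd.DataFrame({
    "day": ["2023-11-01", "2023-11-02", "2023-11-03", "2023-11-04"],
    "verified_shipments": [120, 135, 128, 150],                 # compact on-chain signal
})
off_chain = pd.DataFrame({
    "day": ["2023-11-01", "2023-11-02", "2023-11-03", "2023-11-04"],
    "warehouse_energy_kwh": [410.0, 455.5, 432.0, 505.2],       # heavy off-chain telemetry
})

combined = on_chain.merge(off_chain, on="day")
correlation = combined["verified_shipments"].corr(combined["warehouse_energy_kwh"])
print(combined)
print(f"Correlation between shipments and energy use: {correlation:.2f}")
```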
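The feature-selection sketch for dimension 4 shows one way an analyst might frame data: scoring candidate data points against a target and keeping only the strongest. The synthetic features and target are made up, and the scikit-learn selector is just one of many tools that could fill this step.

```python
# Sketch for dimension 4: narrowing many candidate data points down to the few
# that actually drive the model. Features and target are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
n = 200
features = {
    "temperature_anomaly": rng.normal(size=n),
    "shipping_volume": rng.normal(size=n),
    "random_noise": rng.normal(size=n),
}
X = np.column_stack(list(features.values()))
# Hypothetical target mostly driven by the first two features.
y = (2.0 * features["temperature_anomaly"]
     - 1.5 * features["shipping_volume"]
     + rng.normal(scale=0.1, size=n))

selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
chosen = [name for name, keep in zip(features, selector.get_support()) if keep]
print("Candidate data points:", list(features))
print("Data points worth framing into the model:", chosen)
```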
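Finally, the distillation sketch for dimension 5 shows the classic teacher-student loss in PyTorch: the student learns from the teacher's softened outputs as well as the ground-truth labels. The tiny linear models and random data are placeholders; only the loss formulation is the point.

```python
# Sketch for dimension 5: teacher-student knowledge distillation in PyTorch.
# The tiny models and random data are placeholders for real parent/child systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Linear(8, 3)   # stands in for a large, pre-trained "parent" model
student = nn.Linear(8, 3)   # smaller "child" model that learns from the teacher

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.7):
    # Soft targets: match the teacher's softened probability distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: still respect the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(16, 8)
labels = torch.randint(0, 3, (16,))
with torch.no_grad():
    teacher_logits = teacher(x)          # the teacher's "discernment" is passed down
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()                          # gradients flow through the student; an optimizer step would follow
print(f"Distillation loss: {loss.item():.3f}")
```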
One final note about this 6D Model: if we have data, let’s look at it and make an empirical, intelligent analysis so we can reach data-driven collective decisions and, more importantly, build a consensus to take collaborative actions to evolve and improve everything around us. The main challenge of our society at this point is that everyone has an opinion, data-driven or not. Democracy is good, but as the street version of democracy goes, “If all we have are opinions, let’s go with mine.” W. Edwards Deming said, “In God we trust; all others bring data.” And I’d like to add that “not acting on what the data tells us is worse than not having data at all.”
_______________________________________________
About the Author:
Romeo Siquijor is a tech mentor, civic leader, author, and keynote speaker. He has over 30 years of experience in the IT industry. He is a thought leader in IT leadership, emerging technologies, tech-for-good, and human-centric designs.