10 years ago, I wrote about how standardization enhances the understanding of grid data. The evolution of A.I. tools has now taken this potential to new heights. For an updated version of my original article, check out the link below: https://lnkd.in/g6Uj7q8f
Dr. David A. Bishop’s Post
More Relevant Posts
-
Check out Dr. David A. Bishop's latest article, where he dives into:
🔹 How IEC CIM and MultiSpeak create the foundation for AI excellence.
🔹 Applications of AI in diagnosing, predicting, and improving operations.
🔹 The road ahead to achieve smarter, resilient, and sustainable systems.
Using IEC Standards for Big Data sense-making in Smart Grid and AMI
Dr. David A. Bishop on LinkedIn
-
You do not have to worry about losing your long-term data with datumpin. We provide a searchable archive powered by edge computing that enables you to view events leading up to machinery failure. This gives you the ability to pinpoint the issue and take corrective action. You can also visualise long-term trends in your equipment degradation with data analytics tools. Visit the website to learn more: https://bit.ly/3R1MTi8 #SearchableArchive #SetpointTracking
Setpoint & Alarm Tracking - datumpin
datumpin.com
-
Open Source Innovation Comes to Time-Series Data Compression... Advanced Time Series Compressor (ATSC) represents a significant opportunity to optimize storage costs while maintaining analytical capabilities.
Open Source Innovation Comes to Time-Series Data Compression
https://thenewstack.io
-
9. Balancing Data Consistency and Performance in Distributed Systems

Maintaining data consistency across distributed data sources comes at the cost of performance overhead. There is a trade-off between consistency and performance, and several consistency models help balance it:

Strong Consistency: Every read returns the value of the most recent write to that data item, from any node in the network. All writes to the leader must be synchronized across all nodes, and this synchronization adds significant performance overhead.

Eventual Consistency: All writes to the leader are eventually replicated across all replicas, removing the overhead of real-time synchronization between replicas. The price is temporary inconsistencies between replicas.

Causal Consistency: Writes that are causally related are seen in the same order everywhere, but there is no ordering guarantee for unrelated writes. This model strikes a middle ground between strong and eventual consistency.

Two famous theorems describe the relationship between consistency, availability, and performance in distributed systems:

CAP Theorem: A distributed system can guarantee at most two of three properties: Consistency (all nodes see the same data), Availability (the system responds to all requests), and Partition Tolerance (the system keeps functioning despite network failures). During a partition, you must choose between consistency and availability.

PACELC Theorem: PACELC extends CAP: during a Partition (P), you must choose between Availability (A) and Consistency (C); Else (E), when there is no partition, you trade off between Latency (L) and Consistency (C). It highlights the trade-offs under both failure and normal conditions.

#DistributedSystems #DataConsistency #SystemDesign #CAPTheorem #PACELCTheorem #PerformanceEngineering
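To make the quorum trade-off concrete, here is a minimal Python sketch (my own illustration, with invented names like QuorumStore, not code from the post): with N replicas, read/write quorum sizes satisfying R + W > N behave like strong consistency because every read quorum overlaps every write quorum, while R + W <= N permits stale reads, as in eventual consistency.

```python
import random

class Replica:
    """One copy of the data, tagged with the version it last saw."""
    def __init__(self):
        self.version, self.value = 0, None

class QuorumStore:
    def __init__(self, n, w, r):
        self.replicas = [Replica() for _ in range(n)]
        self.w, self.r = w, r
        self.version = 0

    def write(self, value):
        # Synchronously replicate to W replicas; the rest lag behind
        # (a real system would bring them up to date asynchronously).
        self.version += 1
        for rep in random.sample(self.replicas, self.w):
            rep.version, rep.value = self.version, value

    def read(self):
        # Poll R replicas and return the value with the highest version.
        polled = random.sample(self.replicas, self.r)
        return max(polled, key=lambda rep: rep.version).value

strong = QuorumStore(n=5, w=3, r=3)    # R + W > N: quorums overlap
strong.write("v1"); strong.write("v2")
assert strong.read() == "v2"           # always sees the latest write

eventual = QuorumStore(n=5, w=1, r=1)  # R + W <= N: no overlap guarantee
eventual.write("v1"); eventual.write("v2")
print(eventual.read())                 # may print "v2", "v1", or None
```

Dynamo-style stores expose exactly this N/R/W tuning, which is why the consistency level can be a per-query choice rather than a global one.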
-
The Modern Data Products are Programmable
medium.com
-
The latest update for #Grafana includes "Snowflake data visualization: all the latest features to monitor metrics, enhance security, and more" and "6 tips to improve your Grafana plugin before you publish". #dashboards #monitoring https://lnkd.in/dZy648x
Grafana
opsmatters.com
-
Underutilization is a thing! Short-term solutions in data centers can often create more problems than they solve, leading to underutilization of capacity with severe environmental and financial consequences. Check out this blog to learn how advanced management tools that provide comprehensive data analysis and predictive modeling can help managers proactively address these issues before they escalate: https://ow.ly/uCng50SKxsQ
Short-Term Fixes and Data Center Underutilization
community.cadence.com
-
It's negligent to assume anything about the relative speeds of processes in an asynchronous system. There is no guarantee that messages will be delivered within a bounded time or in a particular order, and that is just as true for the consensus problem: in a purely asynchronous model, perfectly accurate synchronization is not always achievable. The better assumption is that some of the processes WILL fail, and your system should be geared up for that. For that, we need a notion of "timing" that allows us to use timeouts, and from timing and timeouts we build the abstractions of distributed systems: leader election, consensus, failure detection. "Quorums", for example, only describe the blocking properties of the system that guarantee safety - meaning the system waits until a sufficient number of votes have been cast.

Bringing this up to the service level: is it possible to remove all synchronous calls (say, point-to-point HTTP/gRPC calls) between microservices? The realistic answer is no. Some people argue that we should remove dependencies between services anyway, and that the best way to decouple them is an asynchronous mechanism such as a message queue - who would disagree? But the actual problem is needing awareness of data from a different source (or service) in the first place. For that, you may consider: 1) duplicating the data using the aforementioned asynchronous mechanism, or 2) making a synchronous point-to-point call.

The main criticism of the first approach is how close the duplicated data can stay to the data in the source of truth - sooner or later it will be outdated, unless staleness is the point, as with transaction logs. From the data perspective as well, why should your system store data it doesn't generate? Worst of all, some random developer may come along and make the entire system depend on that copy while the source of truth lives elsewhere.
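As a small illustration of the point about timing and timeouts, here is a minimal heartbeat-based failure detector sketch in Python. This is my own example, not code from the post, and names like HeartbeatDetector are made up for illustration.

```python
import time

# Sketch of a heartbeat-style failure detector: each process sends a
# heartbeat periodically; if none arrives within the timeout, the
# process is *suspected* (not proven) to have failed.
class HeartbeatDetector:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = {}  # process id -> time of last heartbeat

    def heartbeat(self, process_id):
        # Record the arrival time of a heartbeat message.
        self.last_seen[process_id] = time.monotonic()

    def suspected(self, process_id):
        # A process we have never heard from, or that has been silent
        # longer than the timeout, is suspected. In an asynchronous
        # network this can be a false positive (the message may just be
        # slow), which is why protocols pair detectors like this with
        # quorum votes to keep safety independent of timing.
        seen = self.last_seen.get(process_id)
        return seen is None or time.monotonic() - seen > self.timeout_s

detector = HeartbeatDetector(timeout_s=0.1)
detector.heartbeat("node-a")
time.sleep(0.2)                      # silence exceeds the timeout
print(detector.suspected("node-a"))  # True - suspected, possibly wrongly
```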
-
How many 𝗯𝘆𝘁𝗲𝘀 are there in 𝟭𝗞𝗕 of data?
A. 1000 B
B. 1024 B

If you answered "A", you are correct - but if you first asked "under the IEC standard or the JEDEC memory standard?", then you've earned my respect. Many engineers misuse the prefix when quantifying data. The confusion stems from being taught that in binary a kilo is 1024 instead of 1000. That is a widespread misconception: 1024 was always meant as an approximation, not a definition.

Back in the day, computer engineers found it very convenient to use 2¹⁰ (1024) to represent a kilo (1000), because it lets you convert any value to the kilo prefix simply by shifting its binary representation right by 10 bits. The approximation became widely adopted in computer and memory engineering, and the "binary kilo" prefix of 1024 was born.

As this shows, having multiple definitions for one prefix is confusing, so the IEC defined the new prefix Ki to address the misuse of the SI prefix in binary systems. According to IEC 80000-13:2008, 1 kilobyte (KB) is 1000 B, while 1 kibibyte (KiB) is 1024 B. In some standards, however, such as the JEDEC memory standard, 1 kilobyte (KB) still means 1024 B.

Recently, I was a victim of this misconception myself. I was tasked with estimating the performance of a high-speed data transfer protocol. After some measurement, I found the protocol running at 0x1b9234fb4 Bps (7,401,066,420 Bps). I swiftly shifted the number right by 20 bits and got 0x1b92 MBps (7058 MBps). Do the math properly on the raw decimal value, though, and the protocol is actually running at 7401 MBps, not 7058. 𝗧𝗵𝗮𝘁 𝗶𝘀 𝗮 𝟱% 𝗲𝗿𝗿𝗼𝗿 𝗶𝗻 𝘁𝗵𝗲 𝗺𝗲𝗮𝘀𝘂𝗿𝗲𝗺𝗲𝗻𝘁.

Key takeaways:
- Always check which standard any documentation follows.
- If the standard is not specified, default to the IEC standard.
- Be wary of memory-related documentation; it usually adopts the JEDEC memory standard.
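A quick sanity check of the arithmetic above, as a sketch (my own, not from the post):

```python
# The same measured rate under the two interpretations of "mega".
rate_bps = 0x1b9234fb4             # 7,401,066,420 bytes per second

mb_decimal = rate_bps / 10**6      # IEC: 1 MB = 1,000,000 B
mib_binary = rate_bps >> 20        # binary: 1 MiB = 1,048,576 B (shift by 20)

print(f"{mb_decimal:.0f} MB/s")    # 7401 MB/s
print(f"{mib_binary} MiB/s")       # 7058 MiB/s
print(f"error: {(mb_decimal - mib_binary) / mb_decimal:.1%}")  # 4.6%
```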
-
From data sprawl to security risks, the reliance on data warehouses for data integration poses several challenges. In this era of big data, as organisations seek to extract greater value from their data, achieving a scalable and adaptable central view of that data becomes increasingly essential. With data virtualisation, a virtual abstraction layer connects directly to each data source and presents a unified view that business users and applications can access and query in real time, regardless of the protocol and format required. This is why data virtualisation is emerging as a solution that helps organisations become more data-centric and gain better insights to boost efficiency and profitability - see the sketch below. #datavirtualisation #datawarehouse #dataintegration #datainfrastructure
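For a feel of how the abstraction layer works, here is a minimal Python sketch (my own illustration; SqlSource, CsvSource, and VirtualLayer are invented names, not a real product's API): per-source adapters hide protocol and format details behind one query interface, so callers get a unified view without first copying everything into a warehouse.

```python
import csv, io, sqlite3

class SqlSource:
    """Adapter over a SQL table."""
    def __init__(self, conn, table):
        self.conn, self.table = conn, table
    def rows(self):
        cur = self.conn.execute(f"SELECT name, amount FROM {self.table}")
        return [{"name": n, "amount": a} for n, a in cur]

class CsvSource:
    """Adapter over CSV text (stand-in for a file or HTTP feed)."""
    def __init__(self, text):
        self.text = text
    def rows(self):
        return [{"name": r["name"], "amount": float(r["amount"])}
                for r in csv.DictReader(io.StringIO(self.text))]

class VirtualLayer:
    """Single query interface fanning out to every source at query time."""
    def __init__(self, sources):
        self.sources = sources
    def query(self, predicate):
        # No central copy: data is fetched from the sources on demand.
        return [row for s in self.sources for row in s.rows()
                if predicate(row)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (name TEXT, amount REAL)")
db.execute("INSERT INTO sales VALUES ('alice', 120.0)")
layer = VirtualLayer([SqlSource(db, "sales"),
                      CsvSource("name,amount\nbob,80\n")])
print(layer.query(lambda r: r["amount"] > 100))  # alice's row only
```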