
Five reasons why data growth will exceed processing speed in the near future – Gigaom


A while back, I documented what I called, with no small amount of hype, “Jonno’s First Law”: namely, that data will always be generated at a greater rate than it can be processed. I believe this principle is fundamental to why we will not see the ultimate vision of artificial intelligence (which I first encountered in college 30 years ago) come true, perhaps for decades.

So what is driving the ‘law’? Moore’s Law, which says the number of transistors on a chip doubles roughly every two years, is the simplest driver, but it is not the only one. Other factors are economic, contextual, and consequences of how we choose to create data. While I haven’t done the math, here are some reasons why Jonno’s First Law will continue to apply:
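As a rough illustration of the gap, here is a minimal sketch comparing compound growth under two doubling periods. The two-year period for processing follows Moore’s Law as stated above; the one-year doubling of data is a hypothetical figure chosen for illustration, not a measured rate.

```python
# Illustrative only: assumed growth rates, not measured figures.
DATA_DOUBLING_YEARS = 1.0        # assumption: data volume doubles yearly
PROC_DOUBLING_YEARS = 2.0        # Moore's Law: transistors double ~every 2 years

def growth_factor(years: float, doubling_period: float) -> float:
    """Compound growth factor after `years`, given a doubling period."""
    return 2 ** (years / doubling_period)

for year in (5, 10, 20):
    data = growth_factor(year, DATA_DOUBLING_YEARS)
    proc = growth_factor(year, PROC_DOUBLING_YEARS)
    print(f"after {year:2d} years: data x{data:,.0f}, "
          f"processing x{proc:,.0f}, gap x{data / proc:,.0f}")
```

Under these assumptions, the gap between data generated and data that can be processed itself doubles every two years, which is the point of the ‘law’: even exponential processing growth loses to faster exponential data growth.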

  1. Creating data requires less processing than interpreting it. Data is easily generated from even the least intelligent sensors. It is also easy to replicate with minimal processing and/or capacity, as illustrated by passive RFID tags and ‘smart’ tiles.

Consequence: a slight modification to a complex data set can be hard to distinguish from the original, meaning the likely result is two complex data sets to interpret rather than one.

  2. There is always more data to be collected. Existing business models are based on gaining advantage from the accumulation of more data. Likewise, the human desire for progress drives higher image quality and frame rates, larger screens, more detailed information from production systems, and so on.

Consequence: the universe cannot be measured molecule by molecule; it is too vast, and the resulting data set would be too large to grasp without a similarly large set of measurements. Heisenberg’s uncertainty principle operates both at the lowest level and in how the data we obtain affects human behavior.

  3. The number of data generators is increasing faster than the number of processors. Digital cameras, and indeed mobile phones, are mainly used to create content. Around 100 million servers exist in the world, compared with 10 billion phones.

Consequence: while processing continues to be commoditized, data creation continues to fragment. Cloud computing represents the former; consumer devices, the latter.
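A trivial back-of-the-envelope calculation using the figures quoted above shows the scale of the generator-to-processor imbalance (the counts are the article’s round numbers, not precise measurements):

```python
# Round figures from the text above; not precise measurements.
servers = 100_000_000       # ~100 million servers worldwide
phones = 10_000_000_000     # ~10 billion phones worldwide

# Data-generating devices per server available to process their output.
ratio = phones / servers
print(f"phones per server: {ratio:.0f}")  # prints "phones per server: 100"
```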

  4. Current models charge less (or nothing) for data generation, and more for processing. The creation of consumer-oriented data is paid for by advertising, which covers its costs but leaves very little over for large-scale information processing.

Consequence: consumer-driven data generation is decentralized by competing interests, as many organizations (Facebook, Amazon) grow their businesses primarily through data accumulation and only secondarily through interpretation.

  5. Many algorithms have only recently become viable. Much of the mathematics behind current AI, machine learning, and so on was established long ago, but until recently the processing needed to apply it was too expensive. As a result, the potential of such algorithms is still being explored.

Consequence: the algorithms we use are based on human understanding, not machine knowledge. This means our ability to interpret information is constrained not only by processing capacity but also by our ability to create the right algorithms, which are themselves dependent on the output of that processing.

We still do not understand how to automate the interpretation of data in a smart way, nor will we in the near future. As Mark Zuckerberg noted, “In a way, AI is both closer and farther than we imagined … we’re still figuring out what intelligence really is.” One final theory is that such a leap is needed before processing power can truly get ‘ahead of the curve’ of data growth and render the data-versus-processing gap moot.
