Big Data

January 27th, 2014

By Arista Networks

BIG DATA BECOMING A COMMON PROBLEM
IDC projects that the digital universe will reach 40 zettabytes (ZB) by 2020, an amount that exceeds previous forecasts by 5 ZB and represents roughly 50-fold growth from the beginning of 2010. With an ever-increasing share of this data being unstructured, the fundamental ways in which we manage and extract value from data are changing. The term unstructured data refers to information that either lacks a pre-defined data model or does not fit well into relational tables. It is generally text-heavy, but may also contain dates, numbers, and other fields. This data makes up what is more commonly known as big data. In the past, the ability to process big data was proprietary and expensive, and few people knew how to work with it.

Mobility, social networking, and search data all arrive unstructured and need some form of big data analytics to increase their value. For many organizations this means running big data analytics on the front end and loading the sorted and processed data into traditional relational databases on the back end. Without some preprocessing this is not possible, and large amounts of relevant information are lost.
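The front-end-to-back-end flow described above can be sketched in a few lines: a preprocessing step parses unstructured text into structured records and loads them into a relational table. This is a minimal illustration only; the log format, field names, and parsing pattern are assumptions for the example, not anything specified in this paper.

```python
import re
import sqlite3

# Hypothetical unstructured input: free-form log lines.
# (The format here is an illustrative assumption.)
raw_lines = [
    "2014-01-27 user=alice action=search query='network buffers'",
    "2014-01-27 user=bob action=login",
    "malformed line with no recognizable fields",
]

# Front end (preprocessing): extract a structured record from each line.
# Lines that do not match the expected pattern are dropped -- the kind of
# information loss the text warns about when preprocessing is too rigid.
pattern = re.compile(r"(\d{4}-\d{2}-\d{2}) user=(\w+) action=(\w+)")

records = []
for line in raw_lines:
    m = pattern.search(line)
    if m:
        records.append(m.groups())

# Back end: load the now-structured records into a relational table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (day TEXT, user TEXT, action TEXT)")
db.executemany("INSERT INTO events VALUES (?, ?, ?)", records)

count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # only 2 of the 3 raw lines yielded structured records
```

At production scale the preprocessing stage is what a framework such as Hadoop parallelizes across a cluster; the relational load step stays largely the same.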

Arista is committed to supporting big data clusters in the way they were designed to operate: with a non-blocking, deep-buffered, high-speed data center network. This, coupled with Arista's EOS, the world's most advanced network operating system, allows best-in-class native integration with popular big data distributions such as Hadoop.