What is Big Data?


The growth of large volumes of unstructured information within organizations has driven the need to collect and manage that information, giving rise to one of the major fields of information management: "Big Data".

The speed at which information is generated around the world is a central concern for the technologies that store it for later statistical analysis. A tangible example is the points card used in supermarkets: the purchase history it records reveals each consumer's buying patterns, so a retailer can determine which days are best for placing offers and when demand for a given product will peak.
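The loyalty-card idea above can be sketched in a few lines. This is a minimal illustration, not a real retail system: the records, card IDs, and dates are invented, and a production pipeline would aggregate millions of rows rather than a hard-coded list.

```python
from collections import Counter
from datetime import date

# Hypothetical loyalty-card purchase records: (card_id, purchase_date)
purchases = [
    ("C001", date(2023, 5, 1)),   # a Monday
    ("C001", date(2023, 5, 6)),   # a Saturday
    ("C002", date(2023, 5, 6)),   # a Saturday
    ("C002", date(2023, 5, 13)),  # a Saturday
    ("C003", date(2023, 5, 3)),   # a Wednesday
]

# Count purchases per weekday to find when demand peaks,
# which suggests the best day to place offers
weekday_counts = Counter(d.strftime("%A") for _, d in purchases)
busiest_day, count = weekday_counts.most_common(1)[0]
print(busiest_day, count)  # Saturday 3
```

The same grouping-and-counting pattern scales up conceptually: replace the list with a stream of transactions and the counter with a distributed aggregation, and you have the core of the purchase-pattern analysis the paragraph describes.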

The information stored on the large servers dedicated to this purpose can be processed and managed with many widely recognized tools.

The increase in the amount of available data presents both opportunities and problems. In general, having more data about customers (and potential customers) should allow companies to better tailor their products and marketing efforts, creating higher satisfaction and repeat business. Companies that can collect large amounts of data gain the opportunity to carry out deeper and richer analysis.

While better analysis is positive, enormous volumes of information can also cause overload and confusion, so organizations must be able to handle large volumes of data.
Determining what makes information valuable becomes a key factor. Structured information, such as numerical records, can be stored and organized efficiently. Unstructured information, for example emails, videos, and text files, may require more sophisticated processing before it becomes genuinely valuable.
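The structured/unstructured contrast can be made concrete with a small sketch. The data values and field names here are invented for illustration: a structured record can be queried directly, while the same fact buried in free text must first be extracted, here with a regular expression.

```python
import re

# Structured record: fields are already named and typed, so the
# value can be queried directly
structured = {"customer_id": 42, "total": 19.99}
print(structured["total"])  # 19.99

# Unstructured text: the same fact must first be extracted before
# it can be aggregated or analyzed
email_body = "Hi, my order arrived, but I was charged $19.99 twice."
match = re.search(r"\$(\d+\.\d{2})", email_body)
extracted_total = float(match.group(1)) if match else None
print(extracted_total)  # 19.99
```

This is why the text notes that unstructured sources need "more sophisticated processing": every field must be recovered from raw content before conventional analysis can begin.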

To really understand the implications of Big Data, it helps to look back at the history of data processing, in particular Business Intelligence (BI) and analytical computing. The thinking behind Big Data can likely be traced back to the days before the computer era, when unstructured information (paper records) was the norm and analysis was in its infancy. Perhaps the first Big Data challenge arose with the 1880 US census, when data on approximately 50 million people had to be gathered.

For the 1880 census, simply counting individuals was not enough: the United States government wanted to work with specific attributes, for example age, sex, occupation, level of education, and even the "number of insane persons in the household." These data held inherent value for the process, but only if they could be counted, organized, tabulated, and presented. New strategies were needed to relate the information to other collected data, for example linking occupations to geographic areas, birth rates to education levels, and countries of origin to skill sets.

The 1880 census indeed produced a great deal of information to manage, yet only severely limited technology was available to analyze it. The Big Data problem could not be solved for the 1880 census, and it took over seven years to manually classify and tabulate the information.
