
Challenges and Opportunities with Big Data

A community white paper developed by leading researchers across the United States

Executive Summary

The promise of data-driven decision-making is now being recognized broadly, and there is growing enthusiasm for the notion of "Big Data." While the promise of Big Data is real -- for example, it is estimated that Google alone contributed 54 billion dollars to the US economy in 2009 -- there is currently a wide gap between its potential and its realization.

Heterogeneity, scale, timeliness, complexity, and privacy problems with Big Data impede progress at all phases of the pipeline that can create value from data. The problems start right away during data acquisition, when the data tsunami requires us to make decisions, currently in an ad hoc manner, about what data to keep and what to discard, and how to store what we keep reliably with the right metadata. Much data today is not natively in structured format; for example, tweets and blogs are weakly structured pieces of text, while images and video are structured for storage and display, but not for semantic content and search: transforming such content into a structured format for later analysis is a major challenge. The value of data explodes when it can be linked with other data, thus data integration is a major creator of value. Since most data is directly generated in digital format today, we have the opportunity and the challenge both to influence the creation to facilitate later linkage and to automatically link previously created data. Data analysis, organization, retrieval, and modeling are other foundational challenges. Data analysis is a clear bottleneck in many applications, both due to lack of scalability of the underlying algorithms and due to the complexity of the data that needs to be analyzed.

Finally, presentation of the results and their interpretation by non-technical domain experts is crucial to extracting actionable knowledge.

During the last 35 years, data management principles such as physical and logical independence, declarative querying, and cost-based optimization have led to a multi-billion dollar industry. More importantly, these technical advances have enabled the first round of business intelligence applications and laid the foundation for managing and analyzing Big Data today. The many novel challenges and opportunities associated with Big Data necessitate rethinking many aspects of these data management platforms, while retaining other desirable aspects. We believe that appropriate investment in Big Data will lead to a new wave of fundamental technological advances that will be embodied in the next generations of Big Data management and analysis platforms, products, and systems.

We believe that these research problems are not only timely, but also have the potential to create huge economic value in the US economy for years to come. However, they are also hard, requiring us to rethink data analysis systems in fundamental ways. A major investment in Big Data, properly directed, can result not only in major scientific advances, but also lay the foundation for the next generation of advances in science, medicine, and business.


1. Introduction

We are awash in a flood of data today. In a broad range of application areas, data is being collected at unprecedented scale. Decisions that previously were based on guesswork, or on painstakingly constructed models of reality, can now be made based on the data itself. Such Big Data analysis now drives nearly every aspect of our modern society, including mobile services, retail, manufacturing, financial services, life sciences, and physical sciences.

Scientific research has been revolutionized by Big Data [CCC2011a]. The Sloan Digital Sky Survey [SDSS2008] has today become a central resource for astronomers the world over. The field of Astronomy is being transformed from one where taking pictures of the sky was a large part of an astronomer’s job to one where the pictures are all in a database already and the astronomer’s task is to find interesting objects and phenomena in the database. In the biological sciences, there is now a well-established tradition of depositing scientific data into a public repository, and also of creating public databases for use by other scientists. In fact, there is an entire discipline of bioinformatics that is largely devoted to the curation and analysis of such data. As technology advances, particularly with the advent of Next Generation Sequencing, the size and number of experimental data sets available are increasing exponentially.

Big Data has the potential to revolutionize not just research, but also education [CCC2011b]. A recent detailed quantitative comparison of different approaches taken by 35 charter schools in NYC found that one of the top five policies correlated with measurable academic effectiveness was the use of data to guide instruction [DF2011]. Imagine a world in which we have access to a huge database where we collect every detailed measure of every student's academic performance. This data could be used to design the most effective approaches to education, ranging from reading, writing, and math to advanced, college-level courses. We are far from having access to such data, but there are powerful trends in this direction. In particular, there is a strong trend toward massive Web deployment of educational activities, and this will generate an increasingly large amount of detailed data about students' performance.





It is widely believed that the use of information technology can reduce the cost of healthcare while improving its quality [CCC2011c], by making care more preventive and personalized and basing it on more extensive (home-based) continuous monitoring. McKinsey estimates [McK2011] a savings of 300 billion dollars every year in the US alone.

In a similar vein, there have been persuasive cases made for the value of Big Data for urban planning (through fusion of high-fidelity geographical data), intelligent transportation (through analysis and visualization of live and detailed road network data), environmental modeling (through sensor networks ubiquitously collecting data) [CCC2011d], energy saving (through unveiling patterns of use), smart materials (through the new materials genome initiative [MGI2011]), computational social sciences (a new methodology fast growing in popularity because of the dramatically lowered cost of obtaining data) [LP+2009], financial systemic risk analysis (through integrated analysis of a web of contracts to find dependencies between financial entities) [FJ+2011], homeland security (through analysis of social networks and financial transactions of possible terrorists), computer security (through analysis of logged information and other events, known as Security Information and Event Management (SIEM)), and so on.

In 2010, enterprises and users stored more than 13 exabytes of new data; this is over 50,000 times the data in the Library of Congress. The potential value of global personal location data is estimated to be $700 billion to end users, and it can result in an up to 50% decrease in product development and assembly costs, according to a recent McKinsey report [McK2011]. McKinsey predicts an equally great effect of Big Data in employment, where 140,000-190,000 workers with “deep analytical” experience will be needed in the US; furthermore, 1.5 million managers will need to become data-literate. Not surprisingly, the recent PCAST report on Networking and IT R&D [PCAST2010] identified Big Data as a “research frontier” that can “accelerate progress across a broad range of priorities.” Even popular news media now appreciates the value of Big Data as evidenced by coverage in the Economist [Eco2011], the New York Times [NYT2012], and National Public Radio [NPR2011a, NPR2011b].

While the potential benefits of Big Data are real and significant, and some initial successes have already been achieved (such as the Sloan Digital Sky Survey), there remain many technical challenges that must be addressed to fully realize this potential. The sheer size of the data, of course, is a major challenge, and is the one that is most easily recognized. However, there are others. Industry analysis companies like to point out that there are challenges not just in Volume, but also in Variety and Velocity [Gar2011], and that companies should not focus on just the first of these. By Variety, they usually mean heterogeneity of data types, representation, and semantic interpretation. By Velocity, they mean both the rate at which data arrive and the time in which it must be acted upon. While these three are important, this short list fails to include additional important requirements such as privacy and usability.

The analysis of Big Data involves multiple distinct phases as shown in the figure below, each of which introduces challenges. Many people unfortunately focus just on the analysis/modeling phase: while that phase is crucial, it is of little use without the other phases of the data analysis pipeline. Even in the analysis phase, which has received much attention, there are poorly understood complexities in the context of multi-tenanted clusters where several users’ programs run concurrently. Many significant challenges extend beyond the analysis phase. For example, Big Data has to be managed in context, which may be noisy, heterogeneous and not include an upfront model. Doing so raises the need to track provenance and to handle uncertainty and error: topics that are crucial to success, and yet rarely mentioned in the same breath as Big Data. Similarly, the questions to the data analysis pipeline will typically not all be laid out in advance. We may need to figure out good questions based on the data.

Doing this will require smarter systems and also better support for user interaction with the analysis pipeline. In fact, we currently have a major bottleneck in the number of people empowered to ask questions of the data and analyze it [NYT2012]. We can drastically increase this number by supporting many levels of engagement with the data, not all requiring deep database expertise. Solutions to problems such as this will not come from incremental improvements to business as usual, of the kind industry might make on its own. Rather, they require us to fundamentally rethink how we manage data analysis.

Fortunately, existing computational techniques can be applied, either as is or with some extensions, to at least some aspects of the Big Data problem. For example, relational databases rely on the notion of logical data independence: users can think about what they want to compute, while the system (with skilled engineers designing those systems) determines how to compute it efficiently.

Similarly, the SQL standard and the relational data model provide a uniform, powerful language to express many query needs and, in principle, allow customers to choose between vendors, increasing competition. The challenge ahead of us is to combine these healthy features of prior systems as we devise novel solutions to the many new challenges of Big Data.
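To make declarative querying and logical data independence concrete, the following minimal sketch (not part of this white paper; it assumes Python with the standard sqlite3 module and an illustrative readings table) states only what result is wanted, while the engine decides how to compute it, for example by choosing an index:

import sqlite3

# Build a small illustrative table of sensor readings in an in-memory database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (sensor_id INTEGER, ts REAL, value REAL)")
cur.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                [(i % 10, float(i), i * 0.5) for i in range(1000)])
cur.execute("CREATE INDEX idx_sensor ON readings (sensor_id)")
conn.commit()

# The query says *what* to compute; the engine picks *how* (here, an index search).
query = "SELECT AVG(value) FROM readings WHERE sensor_id = 3"
for step in cur.execute("EXPLAIN QUERY PLAN " + query):
    print(step)          # shows the plan the engine chose, e.g. a search via idx_sensor
print(cur.execute(query).fetchone())

The same query keeps working if the physical layout changes (for example, if the index is dropped or replaced), which is exactly the independence of "what" from "how" described above.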

In this paper, we consider each of the boxes in the figure above, and discuss both what has already been done and what challenges remain as we seek to exploit Big Data. We begin by considering the five stages in the pipeline, then move on to the five cross-cutting challenges, and end with a discussion of the architecture of the overall system that combines all these functions.

2. Phases in the Processing Pipeline

2.1 Data Acquisition and Recording

Big Data does not arise out of a vacuum: it is recorded from some data generating source. For example, consider our ability to sense and observe the world around us, from the heart rate of an elderly citizen and the presence of toxins in the air we breathe, to the planned Square Kilometre Array telescope, which will produce up to 1 million terabytes of raw data per day. Similarly, scientific experiments and simulations can easily produce petabytes of data today.

Much of this data is of no interest, and it can be filtered and compressed by orders of magnitude. One challenge is to define these filters in such a way that they do not discard useful information. For example, suppose one sensor reading differs substantially from the rest: it is likely to be due to the sensor being faulty, but how can we be sure that it is not an artifact that deserves attention? In addition, the data collected by these sensors most often are spatially and temporally correlated (e.g., traffic sensors on the same road segment). We need research in the science of data reduction that can intelligently process this raw data to a size that its users can handle while not missing the needle in the haystack. Furthermore, we require “on-line” analysis techniques that can process such streaming data on the fly, since we cannot afford to store first and reduce afterward.
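As a minimal sketch of such on-line reduction (not from this paper; it assumes Python, a simple z-score rule over a sliding window, and a hypothetical stream_filter helper), readings that deviate sharply from the recent trend are flagged for retention and inspection rather than silently discarded, while readings that merely track the trend are candidates for summarization or compression:

from collections import deque

def stream_filter(readings, window=50, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    Flagged values may come from faulty sensors or may be genuine anomalies,
    so they are kept for inspection; values that track the trend can be
    summarized or compressed instead of stored in full.
    """
    recent = deque(maxlen=window)
    for value in readings:
        if len(recent) >= 2:
            mean = sum(recent) / len(recent)
            std = (sum((x - mean) ** 2 for x in recent) / len(recent)) ** 0.5
            # The small epsilon also catches spikes against a perfectly flat window.
            if abs(value - mean) > threshold * std + 1e-9:
                yield ("flagged", value)
        recent.append(value)

# Usage: a single spike in an otherwise steady signal is flagged, not dropped.
data = [20.0] * 100 + [85.0] + [20.0] * 100
print(list(stream_filter(data)))    # [('flagged', 85.0)]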

The second big challenge is to automatically generate the right metadata to describe what data is recorded and how it is recorded and measured. For example, in scientific experiments, considerable detail regarding specific experimental conditions and procedures may be required to be able to interpret the results correctly, and it is important that such metadata be recorded with observational data.

Metadata acquisition systems can minimize the human burden in recording metadata. Another important issue here is data provenance. Recording information about the data at its birth is not useful unless this information can be interpreted and carried along through the data analysis pipeline. For example, a processing error at one step can render subsequent analysis useless; with suitable provenance, we can easily identify all subsequent processing that depends on this step. Thus we need research both into generating suitable metadata and into data systems that carry the provenance of data and its metadata through data analysis pipelines.
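One way to picture this is a small sketch of provenance carried through a pipeline (illustrative Python only, not a system described in this paper; Tracked and apply_step are hypothetical names): every derived dataset keeps a record of the steps that produced it, so a step later found to be faulty can be traced to all of its downstream results.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Tracked:
    """A dataset bundled with the provenance of every step that produced it."""
    data: object
    provenance: List[str] = field(default_factory=list)

def apply_step(item: Tracked, step: Callable, description: str) -> Tracked:
    """Run one pipeline step and append a record of it to the provenance trail."""
    return Tracked(step(item.data), item.provenance + [description])

# Usage: if "calibrate v2.1" later proves faulty, every result that lists it in
# its provenance can be identified and recomputed.
raw = Tracked([101.0, 99.5, 250.0, 100.2], ["acquired from sensor S17"])
calibrated = apply_step(raw, lambda xs: [x * 0.98 for x in xs], "calibrate v2.1")
cleaned = apply_step(calibrated, lambda xs: [x for x in xs if x < 200], "drop outliers > 200")
print(cleaned.data)         # the scaled values, with the outlier removed
print(cleaned.provenance)   # the full trail: acquisition, calibration, cleaning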


