In a paper recently published in Paris Tech Review, Big Data: Farewell to Cartesian Thinking?, Jean-Pierre Malle argues that "Big Data deviates from 'traditional' scientific knowledge". So radical is this deviation that it entails a "cultural revolution" turning into an "industrial revolution", as dramatized by the breakneck speed of technological innovation. And indeed Cartesian dualism – among other intellectual and religious influences – did prepare the way for modern scientists to think about the world in abstractions. The Newtonian world-view of physics, for instance, was certainly facilitated by Descartes' philosophical system. Before Descartes and Newton, such plainly observable phenomena as the falling of an apple from a tree, the rhythm of the tides in the oceans, or the movements of the planets around the sun were separate and distinct events, "raw data" in Malle's view, "usually kept for subsequent processes that are not yet determined". Through Newton's abstract conceptualization, however, apples, oceans, and heavenly bodies all became essentially the same: masses attracted by other masses, all moving according to the same laws of gravitation. Malle does not take issue with this, however: "for inductive algorithms, it is preferable, for speed, to transform information from a form 'in extension' (e.g. keeping all of Mr. Smith's receipts [or records of phenomena]) to a form 'in comprehension' (e.g. Mr. Smith buys a loaf every Monday and sometimes a cake [or the law of gravitation]), which is more manageable and less bulky". Abstraction, then, hardly qualifies as a "farewell" to Cartesian thinking.
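To make Malle's contrast between the "in extension" and "in comprehension" forms concrete, here is a minimal sketch in Python; the receipt records and the summarize helper are illustrative assumptions of ours, not anything taken from the article. It compresses an extensional list of purchase records into the intensional rule that Mr. Smith buys a loaf every Monday and sometimes a cake.

```python
from collections import Counter
from datetime import date

# Hypothetical receipts for Mr. Smith, "in extension": one record per purchase.
receipts = [
    {"date": date(2014, 3, 3), "item": "loaf"},   # a Monday
    {"date": date(2014, 3, 3), "item": "cake"},
    {"date": date(2014, 3, 10), "item": "loaf"},  # the next Monday
    {"date": date(2014, 3, 17), "item": "loaf"},
    {"date": date(2014, 3, 17), "item": "cake"},
    {"date": date(2014, 3, 24), "item": "loaf"},
]

def summarize(records):
    """Compress the extensional record into an intensional rule:
    which items appear on every observed Monday, which only on some."""
    mondays = {r["date"] for r in records if r["date"].weekday() == 0}
    counts = Counter(r["item"] for r in records if r["date"].weekday() == 0)
    always = [item for item, n in counts.items() if n == len(mondays)]
    sometimes = [item for item, n in counts.items() if 0 < n < len(mondays)]
    return f"On Mondays, Mr. Smith always buys {always} and sometimes buys {sometimes}."

print(summarize(receipts))
# -> On Mondays, Mr. Smith always buys ['loaf'] and sometimes buys ['cake'].
```

The rule is both "more manageable and less bulky" than the raw receipts and, crucially, it is already an inductive generalization from past records to a regularity.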
In fact, according to the author, the issue lies with deductive thinking, the hallmark of the "Western scientific tradition that derives from Descartes". Malle's critical point is that Big Data conceptualization and processing, in contrast, demand induction. The metaphor called upon to illustrate the point is illuminating; it goes along the following lines:
- "Induction, unlike deduction, is a mechanism used by the human brain at almost every moment";
- "[/Induction/] is particularly relevant when analyzing a situation out of its context";
- "For example, to apply deductive logics to the decision of crossing a street, you would need to measure all vehicle speeds, locate them in space and calculate, using a set of equations, which is the right time to cross. Needless to say, the slowness of this technique of analysis would be more of an obstacle, compared with the use your own senses and cognitive abilities… In fact, our brain captures the global scene in a comprehensive situation and processes it by using induction. To do this, it generalizes the principles observed in similar situations involving us – or others – that we have observed (other people crossings streets, at any time, with or without light, wet or dry ground, etc.). Our brain is able to integrate a huge number of parameters in a flash and project the results of its inductions on the current scene.";
- "This is exactly what Big Data processing needs: search instantly for the critical information, process them as a whole without preconditions, reproduce effective mechanisms that have been observed in the past, generate new data that can be used directly in the current situation.";
These are strongly stated epistemological claims. They are immediately followed by the inevitable reference to Chris Anderson's famous 2008 Wired article, to the effect that "the knowledge from Big Data will be produced by 'agnostic' statistics. This lack of ideology is the very condition of their success: in their own way, numbers speak for themselves". They are nonetheless worth investigating further for deeper qualification.
In the analogy offered by the author, the Big Data algorithm – later called an inductive algorithm in the article – plays the role of the human brain confronting a worldly situation in real time. It is a trademark of current commentary on Big Data that the resurgence of early cybernetics ideas is a source of metaphors. The viewpoint from constructivist empiricism, acknowledged in the article, is then that Big Data repositories and streams constitute the "world" – a Brave New World, possibly – and that Big Data processing constructs the "knowledge", a representation of "reality". In the vein of cybernetic lines of reasoning, what is missing here is a proper identification of the closing of the loop, the Wiener-Odobleja notion of feedback. Big Data is not as innocent as that, notwithstanding Anderson. As Hans Jonas remarked, "there is a strong and, it seems, almost irresistible tendency in the human mind to interpret human functions in terms of the artifacts that take their place, and artifacts in terms of the replaced human functions". Big Data appears to be generally collected, if in unquestionably staggering volume and at blazing speed, to serve a purpose – precisely an analysis in view of, say, a commercial (e-commerce marketing) or a public health (Open Data) objective. Aren't inductive algorithms, singled out as "particularly relevant when analyzing a situation out of its context" and "generat[ing] new data that can be used directly in the current situation", reviving the cybernetics attempt to account for purposive behavior without purpose – just as behaviorism is an attempt at psychology without the psyche?
Induction itself, moreover, is fraught with deep questions. The fascinating predicates of Nelson Goodman's Fact, Fiction and Forecast, grue and bleen, defined respectively as:
- (grue) applying to all things examined before a time t just in case they are green but to other things just in case they are blue, and
- (bleen) applying to all things examined before a time t just in case they are blue but to other things just in case they are green;
are illuminating cases in point. They illustrate the new riddle of induction.
Let us start with the Humean old riddle of induction, because it sits at the core of the author's argument in favor of inductive algorithms. Goodman describes Hume's riddle of induction as the problem of the validity of the predictions we make. Since predictions are about what has yet to be observed, and because there is no necessary connection between what has been observed and what will be observed, what is the justification for the predictions we make? And indeed, as Malle rightfully argues, we cannot use deductive logic to infer predictions about future observations from past observations, because there are no valid rules of deductive logic for such inferences. In fact there is a very active research community in mathematical logic, since Haskell Curry, which strives to refine these statements further (works by Martin-Löf, Girard, and Prawitz seem relevant here). Hume's answer was that our observations of one kind of event following another kind of event result in our minds forming habits of regularity. Goodman tackles the old riddle by turning, for comparison, to the problem of justifying a system of rules of deduction. For Goodman, the validity of a deductive system is justified by its conformity to good deductive practice. The justification of the rules of a deductive system then depends on our (individual, group, sociological) judgments about whether to reject or accept specific deductive inferences. Thus, for Goodman, the problem of induction dissolves into the same problem as justifying a deductive system: the problem of confirmation of generalizations.
The new riddle of induction, for Goodman, rests on our ability to distinguish law-like generalizations, required for making predictions, from non-law-like generalizations. Law-like generalizations are capable of confirmation while non-law-like generalizations are not. A good inductive algorithm run on a Big Data collection of observations of emeralds should conclude that they are green and not grue, and this at whatever time t it is run. How can we make sure that it does so?
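To see that the question is not idle, consider a small sketch (the emerald records and the coded green and grue predicates below are our own illustrative assumptions, not anything from Goodman or Malle): every observation made before t confirms "all emeralds are green" and "all emeralds are grue" equally well, yet the two hypotheses disagree about any emerald examined after t.

```python
T = 100  # the critical time t in Goodman's definitions

def green(thing):
    return thing["colour"] == "green"

def grue(thing):
    # Goodman's predicate: green if examined before t, blue otherwise.
    return ((thing["examined_at"] < T and thing["colour"] == "green")
            or (thing["examined_at"] >= T and thing["colour"] == "blue"))

# All observations available to the inductive algorithm precede t.
observed_emeralds = [{"colour": "green", "examined_at": k} for k in range(50)]

print(all(green(e) for e in observed_emeralds))  # True
print(all(grue(e) for e in observed_emeralds))   # True: equally well confirmed

# Yet for an emerald examined after t the hypotheses diverge:
#   "all emeralds are green" -> predicts colour "green"
#   "all emeralds are grue"  -> predicts colour "blue"
```

Nothing in the data itself tells the algorithm which of the two generalizations is projectable; that choice has to come from outside the "agnostic" statistics.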
This is where pragmatic approaches are called for, and after this investigatory detour we concur – albeit for different reasons – with some of the calls to action in Malle's article. Leo Breiman, in Statistical Modeling: The Two Cultures, helps us reclaim some of these treacherous inductive grounds. Breiman makes Big Data fit in the modest (though mythical) black box: "Statistics starts with data. Think of the data as being generated by a black box in which a vector of input variables x (independent variables) go in one side, and on the other side the response variables y come out. Inside the black box, nature functions to associate the predictor variables with the response variables". (Basic settings indeed, which may nevertheless evoke ominous undertones of a Schrödinger's cat sealed box. Rest assured, gentle reader, we won't go here into alternative interpretations of Quantum Mechanics!) Breiman then reintroduces purpose, in the form of two goals for the analysis:
- Prediction: to be able to predict what the responses are going to be to future input variables;
- Information: to extract some information about how nature is associating the response variables to the input variables.
Earlier remarks on the often underplayed importance of the purposes served by collecting and analyzing data, and on the hard problem of distinguishing Goodman's projectable predicates from non-law-like generalizations, should make Breiman's point clear enough. Breiman then goes on to oppose two cultures in their approach to both goals.
In the data modeling culture, the analysis begins by assuming a stochastic data model for the inside of the black box. The values of the parameters are estimated from the data, and the model is then used for information and/or prediction. (One then talks of model validation, goodness of fit, and so forth.) In the algorithmic modeling culture, the analysis considers the inside of the box complex and unknown. The approach is to find a function f(x), an algorithm that operates on x to predict the responses y. (One also talks of algorithm validation, but in terms such as predictive accuracy.)
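As a hedged illustration of the two cultures, and of Breiman's two goals along the way, here is a sketch assuming a synthetic "nature" function and off-the-shelf scikit-learn estimators (none of this is code from Breiman's paper): a linear model stands in for the data modeling culture, while a random forest – the method Breiman himself pioneered – stands in for the algorithmic modeling culture.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A stand-in for nature's black box: y depends non-linearly on x1 and x2, plus noise.
X = rng.uniform(-2, 2, size=(1000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data modeling culture: assume a (linear) stochastic model, estimate its parameters.
linear = LinearRegression().fit(X_train, y_train)
print("linear R^2:", linear.score(X_test, y_test), "coefficients:", linear.coef_)

# Algorithmic modeling culture: treat the box as unknown, fit a flexible f(x).
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("forest R^2:", forest.score(X_test, y_test), "importances:", forest.feature_importances_)
```

The held-out R² addresses the prediction goal; the linear coefficients and the forest's feature importances answer the information goal in two different vocabularies. On a non-linear black box such as this one, the algorithmic model should predict markedly better, which is Breiman's point.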
Breiman, a well-informed critic of the prevalence of the data modeling culture in statistics, calls for a new balance in the discipline, leaning towards the algorithmic modeling approach – a research area to which he made momentous contributions. His conclusion – the paper was published in 2001 – offers the following: "The roots of statistics, as in science, lie in working with data and checking theory against data. I hope in this century our field will return to its roots." The current flurry of research and development on Big Data representation and algorithms, both theoretical and practical, testifies that this vow was not pledged in vain.