The Fragile Families and Child Wellbeing Study (FFCWS) is a joint research collaboration between Princeton University’s Center for Research on Child Wellbeing and the Columbia Population Research Center. It is a 15-year sociological investigation into how the lives of children born to unmarried couples unfold as they grow up. Close to 5,000 children born between 1998 and 2000 in large US cities were randomly selected and then followed up at ages one, three, five, nine, and fifteen.
Crowdsourcing predictions of life outcomes
Fundamentally, the study was designed to address four areas of interest: (1) the conditions and capabilities of unmarried parents, particularly fathers; (2) the nature of the relationships between the unmarried parents; (3) how children born into such families fare; and (4) the impact that policies and environmental conditions have on these families and their children. Data were collected through in-home assessments and interviews with the parents.
Recently, one of the FFCWS’s principal investigators, Sara McLanahan, Professor of Sociology and Public Affairs at Princeton, and her colleagues challenged hundreds of other researchers, computer scientists, statisticians, and computational sociologists to forecast six life outcomes for the children, their parents, and their households, including the children’s grade point average in school, their self-reported level of perseverance, and the overall level of household poverty.
The challenge allowed fellow scholars to use part of the FFCWS data, working with close to 13,000 data points collected from more than 4,000 families over the course of five months. Interestingly, regardless of whether they used traditional statistical models or cutting-edge machine learning algorithms, none of the researchers was able to predict any of the six life outcomes accurately. Some participants, especially those who study the use of artificial intelligence (AI) in society, were not at all surprised by the results.
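To make “accurate prediction” concrete: challenges of this kind typically score submissions on held-out families, reporting how much a model improves on simply predicting the average outcome. Below is a minimal sketch of such a holdout R² score, an illustration of the idea rather than the challenge’s actual scoring code (the sample numbers are invented):

```python
def r_squared_holdout(y_true, y_pred):
    """Fraction of outcome variance explained on held-out cases.

    1.0 means perfect prediction; 0.0 means no better than always
    predicting the mean; negative means worse than the mean.
    (Simplification: the benchmark mean is taken from the holdout
    set itself rather than from a separate training set.)
    """
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical GPA outcomes for four held-out children vs. a model's guesses.
score = r_squared_holdout([3.0, 2.5, 3.5, 2.0], [2.9, 2.7, 3.2, 2.4])
print(f"holdout R^2 = {score:.2f}")
```

In the challenge, scores on all six outcomes stayed far below 1.0 on this kind of scale, which is what “not accurate” means here.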
Why does AI fail to be accurate?
These AI experts believe it all boils down to the AI black-box problem: if an AI system cannot explain how it arrives at a given solution, it also cannot explain why its predictions miss the mark. This is also why a more complicated deep learning model does not necessarily perform better than a simple algorithm. Furthermore, researchers attributed the inaccuracy to the way the research questions were phrased.
For example, the measure of perseverance is subjective. Unless there is a systematic way to account for it that also takes into consideration the racial and social dimensions that come with it, an algorithm that tries to capture perseverance will never be error-free. In short, a massive amount of data and a complex machine learning model do not guarantee accurate predictions. Those without much AI experience need to be realistic and recognise that, at the end of the day, the technology is not a magic potion.
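The point that more data and model complexity do not guarantee accuracy can be seen in a small synthetic experiment (invented numbers, not FFCWS data): when the outcome is mostly noise and candidate features outnumber the families available for training, a least-squares model that uses every feature does worse on held-out data than simply predicting the mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup loosely mimicking the challenge: more candidate
# features than training families, and an outcome that is mostly noise.
n_train, n_test, n_features = 200, 200, 250
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

true_coef = np.zeros(n_features)
true_coef[:5] = 0.3  # only 5 of the 250 features carry any real signal
y_train = X_train @ true_coef + rng.normal(size=n_train)
y_test = X_test @ true_coef + rng.normal(size=n_test)

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

# "Simple" model: predict the training-set mean for every family.
baseline_mse = mse(y_test, np.full(n_test, y_train.mean()))

# "Complex" model: least-squares fit on all 250 features, which
# (with more features than samples) fits the training data exactly.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
complex_mse = mse(y_test, X_test @ coef)

print(f"mean-baseline test MSE: {baseline_mse:.2f}")
print(f"all-features test MSE:  {complex_mse:.2f}")
```

The flexible model memorises the training noise and generalises worse than the trivial baseline, which is the pattern the challenge participants ran into at scale.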
The study was published in the Proceedings of the National Academy of Sciences and can be read here.