When Katelin Cruz was released from her most recent psychiatric hospitalization in March, she faced a familiar mix of emotions. On the one hand, she was relieved to leave the ward, where nurses took away her shoelaces and sometimes accompanied her into the shower to make sure she would not hurt herself. On the other hand, she dreaded what awaited her outside.
Her life on the outside, she said in an interview, was as chaotic as it had ever been: she had a pile of unpaid debts and no permanent place to live. It would have been easy to slip back into suicidal thinking. The weeks after discharge from a psychiatric hospital are a notoriously difficult period for fragile patients; according to one study, the suicide rate during that time is roughly 15 times the national average.
This time, however, Ms. Cruz, 29, left the hospital as part of a large research project that is trying to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: predict who is likely to attempt suicide and when, and then intervene.
A Fitbit strapped to her wrist recorded her sleep and her movement during the day. An application on her smartphone gathered data about her moods, her whereabouts and her interactions with other people. On the 12th floor of the William James Building, home to Harvard University's psychology department, a team of researchers received a constant stream of data from each device.
Few emerging areas in mental health have generated as much excitement as machine learning, which uses computer algorithms to better predict human behaviour. There is also growing interest in biosensors that can track a person's state of mind in real time, drawing on cues such as music choice, social media posts, facial expressions and tone of voice.
Matthew K. Nock, a Harvard psychologist and one of the country's most prominent suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be deployed when an at-risk patient is discharged from the hospital.
He offers a scenario to illustrate how it might work: The patient's sleep is disrupted, she reports a low mood on questionnaires, and GPS shows she is not leaving the house. But the accelerometer on her phone shows she is moving around a lot, which may be a sign of agitation. The algorithm flags the patient. A ping sounds on a dashboard. And, at just the right moment, a clinician reaches out with a text message or a phone call.
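As a rough illustration only (not Dr. Nock's actual system, whose design has not been published in this level of detail), a simple rule-based version of such a flag might look like the sketch below. Every signal name and threshold here is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class DailySignals:
    """One day of collected signals (hypothetical schema)."""
    hours_slept: float             # from the wrist-worn tracker
    mood_score: int                # self-reported questionnaire, 0 (worst) to 10 (best)
    km_travelled: float            # from phone GPS
    accelerometer_activity: float  # phone motion, arbitrary units


def should_flag(day: DailySignals) -> bool:
    """Flag the combination described in the scenario: disrupted sleep,
    low self-reported mood, staying home, yet high physical movement
    (a possible sign of agitation). Thresholds are illustrative only."""
    disrupted_sleep = day.hours_slept < 5.0
    low_mood = day.mood_score <= 3
    homebound = day.km_travelled < 0.5
    agitated = day.accelerometer_activity > 8.0
    return disrupted_sleep and low_mood and homebound and agitated


if __name__ == "__main__":
    today = DailySignals(hours_slept=4.2, mood_score=2,
                         km_travelled=0.1, accelerometer_activity=9.5)
    if should_flag(today):
        print("Risk flag raised: notify clinician dashboard")  # the 'ping'
```

A real system would presumably replace these hand-set thresholds with a model learned from data, which is precisely where the difficulties described next arise.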
There are plenty of reasons to be sceptical that an algorithm can ever achieve this level of accuracy. Suicide is such a rare event, even among those at highest risk, that any effort to predict it is bound to produce false positives, forcing interventions on people who may not need them. False negatives, meanwhile, could expose clinicians to legal liability.
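The arithmetic behind that scepticism is straightforward: when the event being predicted is rare, even an accurate classifier mostly flags people who will never attempt suicide. The short sketch below works through a hypothetical example; the figures are made up for illustration and are not clinical estimates:

```python
# Hypothetical illustration of the base-rate problem, not real clinical figures.
base_rate = 0.01      # assume 1 in 100 monitored patients attempts suicide
sensitivity = 0.90    # classifier catches 90% of true cases (assumed)
specificity = 0.90    # classifier clears 90% of non-cases (assumed)

true_positives = base_rate * sensitivity
false_positives = (1 - base_rate) * (1 - specificity)

# Positive predictive value: of everyone the model flags,
# what fraction actually goes on to attempt suicide?
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")  # ~8.3%: more than 9 in 10 flags are false alarms
```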
Algorithms also require detailed, long-term data from large numbers of people, and it is nearly impossible to observe large numbers of individuals who die by suicide. Finally, the data collection this kind of monitoring requires raises privacy concerns for some of the most vulnerable members of society.