The study tracked 30-day suicide attempt risk among 77,973 patients in nonpsychiatric clinical settings in real time from June 2019 to April 2020.
Three things to know:
- The number needed to screen for suicide risk was reasonable for algorithmic screening, which required no additional data collection or in-person screening, the report says.
- Risk models can be implemented with accurate performance, but performance is not equal across clinical settings, so models must be recalibrated before deployment.
- The data the model uses to predict suicide risk include age, race, gender, medications, past healthcare utilization, patient ZIP code and prior medical conditions.
The next step is pairing the model with low-cost, low-harm preventive strategies to help prevent future suicidality, the report suggests.