How Are You, Really? AI Can Find Out.

Kristina Arezina
10 min read · Nov 14, 2020

How are you doing? Oftentimes the answer to this question is "I'm good!" Because of this default answer, it can be hard to determine how the person you are chatting with is really feeling. Are they actually happy and healthy?

Doctors often feel the same uncertainty when they are attempting to evaluate whether someone has a mental disorder that is tricky to diagnose, such as depression.

The uncertainty that doctors face is a huge problem, as depression is a leading cause of disability worldwide. Although there are known, effective treatments for mental disorders, between 76% and 85% of people in low- and middle-income countries receive no treatment for their disorder!

Some barriers to effective care include a lack of resources, lack of trained health-care providers, and social stigma associated with mental illness.

Inaccurate assessment is a significant contributor to people not receiving the help they need. In countries of all income levels, people who are depressed are often not correctly diagnosed. And others who do not have the disorder are too often misdiagnosed and prescribed antidepressants.

Low accuracy in a GP's identification of depression can mean that appropriate treatment is not offered to those who may benefit from it.

This is a huge problem as untreated depression may result in increased morbidity due to lack of energy, impaired cognitive focus, and adverse effects on an individual’s social, work or study, and home life.

Why Is the Assessment of Mental Health Disorders Inaccurate?

A 2009 meta-analysis of 50,000 patients published in The Lancet found that general practitioners correctly identified depression in only 47.3% of cases!

Meanwhile, a 2008 study by researchers at the Brown University School of Medicine found that 57% of adults diagnosed with bipolar disorder did not meet diagnostic criteria upon a more comprehensive diagnostic review.

Aside from bipolar disorder and depression, some of the most frequently misdiagnosed mental health disorders include borderline personality disorder, ADHD, PTSD, and anxiety.

So, what causes these high rates of misdiagnosis? One factor could be that “clinicians are inclined to diagnose disorders that they feel more comfortable treating” says Dr. Mark Zimmerman, associate professor at Brown University.

Another major factor behind misdiagnosis is that patients often do not tell the doctor their full symptoms. This may be due to the stigma of mental illness, not recognizing what they are experiencing as symptoms, or simply not feeling comfortable telling the doctor how they are feeling.

Also, if someone suspected they had depression or another mental illness, most hospitals would use the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), following a categorical approach based on qualitative questioning to form a diagnosis.

This can be an issue, as qualitative questions are difficult to answer: they require one to gauge how they are feeling with no empirical data to back it up.

Clearly, there are a lot of issues with the way mental illness is diagnosed now. Perhaps it is time to look into different tools we can use to aid diagnosis? Well, it turns out Facebook and machine learning (ML) might be able to help.

Using Facebook Posts To Find Depression

Facebook has over 2.2 billion active users, and they share all kinds of things. Recognizing this vast amount of shared data, researchers combined Facebook posts with medical records to detect early symptoms of mental health problems.

With the Facebook data in hand and using an ML model, researchers could identify depressed patients with a fair degree of accuracy (AUC = 0.69), approximately matching the accuracy of screening surveys benchmarked against medical records.

The researchers reported that the words you post in your status updates can contain hidden information about your mental health. Language predictive of depression includes references to typical symptoms such as sadness, loneliness, hostility, rumination, and increased self-reference.

To use Facebook's data, researchers used natural language processing (NLP) techniques to make inferences about people's mental states from what they write on Facebook.

How Can NLP Be Used To Help Diagnose Depression?

In essence, NLP can be used to detect the sentiment of text via sentiment analysis. This is the process of determining the emotional tone behind a series of words used to gain an understanding of the attitudes, opinions, and emotions expressed within an online mention.

But that is not to say that sentiment analysis is a perfect science. Human language is complex, so teaching a machine to analyze the grammatical nuances, cultural variations, slang, and misspellings that occur in online mentions is a difficult process.
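To make this concrete, below is a quick sketch of off-the-shelf sentiment scoring using NLTK's VADER analyzer. This tool choice is my own illustration, not the method used in the Facebook study.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download of VADER's sentiment lexicon
sia = SentimentIntensityAnalyzer()

# 'compound' ranges from -1 (most negative) to +1 (most positive)
print(sia.polarity_scores("I'm good!"))
print(sia.polarity_scores("I feel so alone and tired of everything."))

Even a simple lexicon-based scorer like this picks up the difference in tone between the two replies, though it misses sarcasm and context, which is part of why the rest of this article builds a learned model instead.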

How Does NLP & Sentiment Analysis Work?

Although NLP and sentiment analysis can be tricky to get right, there are three common steps engineers use to perform sentiment analysis on a text, which I will outline below.

1. Tokenization

This is the process of converting words into a format that the computer can process. This is an important step, as it allows a neural network to work with the meaning of the words.

[Image: tokenization of text]

We can do tokenization by writing code in Python: we can encode the words in a sentence such as "I love my dog" into numbers. The Tokenizer API from Keras in TensorFlow helps with this step.

Also, the tokenizer handles details such as punctuation for us: it strips punctuation by default, so "I love sushi." and "I love sushi!" will have the same tokenization.
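Here is a minimal sketch of this step, assuming TensorFlow 2.x (the sentences and the num_words value are just illustrative):

from tensorflow.keras.preprocessing.text import Tokenizer

sentences = [
    "I love my dog",
    "I love sushi.",
    "I love sushi!"
]

tokenizer = Tokenizer(num_words=100)  # keep the 100 most frequent words
tokenizer.fit_on_texts(sentences)     # build the word index from the corpus

# Punctuation is stripped by default, so "sushi." and "sushi!" share a token,
# e.g. {'i': 1, 'love': 2, 'sushi': 3, 'my': 4, 'dog': 5}
print(tokenizer.word_index)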

Now our words are represented by numbers! But that is not too useful yet. Next, we need to represent sentences by sequences of numbers in the correct order.

Then we will have data ready for processing by a neural network to understand the sentiment of the text!

2. Turning sentences into data

Now we need to create sequences of numbers from the sentences, so that the data is ready for teaching the neural network.

And lucky for us, the Tokenizer in TensorFlow has a texts_to_sequences method which performs most of the work. It creates a sequence of tokens representing each sentence, so that the text data is ready for training a neural network.

[Image: word-index pairs (top) and the sequences texts_to_sequences returned (bottom)]

However, we need to think about how the neural network will classify text containing words it has never seen before (words that are not in the word index, since the training set had a limited vocabulary).

It is important not to lose the length of the sequence, or the neural network will not work properly. So, we use the out-of-vocabulary token property and set it to something we would not expect to see in the training set ("<OOV>", for example).

The tokenizer will create a token for "<OOV>" and then replace words that it does not recognize with this out-of-vocabulary token. This way we lose far less meaning than if we simply dropped unknown words.
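A minimal sketch of the out-of-vocabulary handling, reusing the Keras Tokenizer from above (the example word is made up):

from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
tokenizer.fit_on_texts(["I love my dog"])

# "manatee" was never seen during fitting, so it maps to the <OOV> token
# (index 1) instead of being dropped, preserving the sequence length.
print(tokenizer.texts_to_sequences(["I love my manatee"]))  # e.g. [[2, 3, 4, 1]]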

Another question we should think about when turning sentences into data is: how can we train a neural network on sentences of different lengths? Unlike images, which are usually all the same size, sentences can vary drastically in length, and this can bias the neural network.

A simple solution to this problem is padding. We can import pad_sequences from Keras in TensorFlow to pad the sequences so that each one is the same size, padded with zeros at the front. We can also make all the sequences shorter by chopping words off the start or the end.

[Image: initial set of sequences (top) and the padded set of sequences (bottom)]
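Here is a minimal sketch of padding with pad_sequences (the token values below are made up for illustration):

from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[2, 3, 4, 5], [2, 3, 6], [7, 2, 3, 4, 5, 8, 9]]

# By default pad_sequences pads with zeros at the front ('pre');
# maxlen truncates longer sequences (also from the front by default).
padded = pad_sequences(sequences, maxlen=5)
print(padded)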

3. Training a model to recognize sentiment in text

Next, we need to build a classifier to recognize sentiment in text. Sarcasm is tricky, but we can use a dataset of headlines where each headline has been categorized as sarcastic or not. We can then train a classifier on this so that it can tell us if a new piece of text looks like it might be sarcastic.

Now let's code this! First, we need to load the dataset from JSON into Python structures for the model to work with. For this, we can import Python's json module at the top of our file and load the sarcasm.json file with it.

Then, we can create lists of sentences, labels, and URLs, and as we iterate through the JSON we can load the values into the lists. Once we have the three lists, we can preprocess the text.
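A minimal sketch of that loading step, assuming sarcasm.json is a list of records with "headline", "is_sarcastic", and "article_link" keys, as in the public version of the dataset:

import json

with open("sarcasm.json", "r") as f:
    datastore = json.load(f)

sentences, labels, urls = [], [], []
for item in datastore:
    sentences.append(item["headline"])    # the text we will classify
    labels.append(item["is_sarcastic"])   # 1 = sarcastic, 0 = not
    urls.append(item["article_link"])     # kept for reference only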

Calling tokenizer.fit_on_texts(sentences) on the headlines creates tokens for every word in the corpus. Then, we can turn the sentences into sequences of tokens and pad them to the same length with the code below. If we want to inspect them, we can simply print them out.

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index

sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, padding='post')  # pad with zeros at the end

print(padded[0])
print(padded.shape)
Turn sentences into sequences of tokens

Next, we need to split up the data for training and testing from the sequences. The neural network will only see the training data and will never see the test data.
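A simple way to do this is slicing the lists before tokenizing; the training_size cutoff here is an assumption for illustration:

training_size = 20000  # assumed cutoff; tune to your dataset size

training_sentences = sentences[:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[:training_size]
testing_labels = labels[training_size:]

Note that the tokenizer should be fit on the training sentences only, so the test data stays truly unseen; both sets are then tokenized and padded the same way.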

Awesome! We are close to getting the neural network to work. But before we get into that, we should understand how our code can get meaning from the numbers that represent the words in a sentence.

This is where embeddings come in. We can plot the sentiment of a word as coordinates in the x-y plane, and by looking at the direction of the word's vector we can start to determine its meaning.

[Image: plotting words in the x-y plane]

What if words from sentences labeled with sentiments, like sarcastic and not sarcastic, are plotted in many such dimensions? As it trains, the model learns what the directions in this multi-dimensional space should look like.

Words that only appear in the sarcastic sentences will have a strong component in the sarcastic direction, and others will have one in the not-sarcastic direction. As we load more and more sentences into the network for training, these directions can change to reflect that.

And when we have a fully trained network and give it a set of words, it can look up the vectors for those words, sum them up, and thus give us an idea of the sentiment. This whole concept is known as embedding. Below is an example of embedding text using TensorFlow.

import tensorflow as tf

model = tf.keras.Sequential([
    # Learn a vector (direction) for each word in the vocabulary
    tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
    # Pool the word vectors of each sentence into a single vector
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(24, activation='relu'),
    # One sigmoid output: probability that the headline is sarcastic
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

The top layer in the code above is an embedding, where the direction of each word's vector is learned epoch by epoch; an epoch is one full pass of the entire training dataset through the learning algorithm.

After that, we pool with global average pooling, which averages the word vectors across each sentence. This is then fed into a dense neural network.
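A minimal sketch of the training call; the variable names (training_padded and so on) refer to the padded training and testing sets from the split above, and the epoch count is an assumption:

import numpy as np

# Keras expects arrays, not Python lists
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)

# One epoch = one full pass over the training data
history = model.fit(training_padded, training_labels,
                    epochs=30,
                    validation_data=(testing_padded, testing_labels),
                    verbose=2)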

Cool! But how do we use this to establish the sentiment of new sentences? Let's create some sentences that we want to classify in a sentence array, then use the tokenizer that was created earlier to convert them into sequences. This way the words will have the same tokens as the training set.

We then pad those sequences to the same dimensions as those in the training set, using the same padding type, and predict on the padded set. Below is the code for sentiment analysis of new text.

sentence = ["granny starting to fear spiders in the garden might be real",
            "game of thrones season finale showing this sunday night"]
sequences = tokenizer.texts_to_sequences(sentence)
padded = pad_sequences(sequences, maxlen=max_length,
                       padding=padding_type, truncating=trunc_type)
print(model.predict(padded))

Sentiment analysis is a powerful tool that could help doctors diagnose mental illness. But at the moment, most applications of it aim to increase the profits of businesses. Hopefully, with public awareness of sentiment analysis, society will demand that institutions such as healthcare use this powerful tool for the good of humanity.

TL;DR

  • Doctors often feel uncertain when they are attempting to evaluate whether someone has a mental disorder. This causes many problems.
  • Many things contribute to the high rates of misdiagnosis: clinicians are inclined to diagnose disorders that they feel more comfortable treating, patients oftentimes do not tell the doctor their full symptoms, and the DSM-5 follows a categorical approach based on qualitative questioning to form a diagnosis.
  • Researchers used what people shared on Facebook and medical records to detect early symptoms of a mental health problem.
  • The main steps of NLP-based sentiment analysis are: tokenization, turning sentences into data, and training a model to recognize sentiment in text.
