Suggested readings for Week 6

Here are some suggested readings for Week 6.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the preceding sentence).  All of them are available for free on line.


Suggested readings for Week 5

Here are some suggested readings for Week 5.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the preceding sentence).  All of them are available for free on line except for the books (although the Good and Hardin book is available for free, as well).  All of them should be available in an academic library.  Feel free to contact me if you have trouble finding a copy of either.

Suggested readings for Weeks 3 and 4

Here are some suggested readings for Weeks 3 and 4 of class.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the previous sentence).  All of them are available for free on line except for the book by Jackson and Moulinier.  It is truly excellent, and well worth the cost.

General overviews of these weeks’ topics: tasks and tools in natural language processing

Distributions in/of linguistic data

Machine learning (for natural language processing or anything else)

Shared tasks in natural language processing

Natural language processing and social media


How fast does Kevin speak?

I’d like your opinion about my rate of speech during lectures.  Too fast?  Too slow?  Just right?  Let me know.

Suggested readings for Weeks 2 and 3

Here are some suggested readings for Weeks 2 and 3 of class.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture, regardless of whether or not it is covered in these readings.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the previous sentence).  All of them are available for free on line except for the book by Emily Bender, and you may be able to get that one for free, too, through your university.

Examples of ambiguity

Links to a lot of examples of ambiguity:

Syntactic ambiguity

Anaphoric reference ambiguity

Lexical ambiguity

Phonological ambiguity

Suggested readings for Week 1

Here are some suggested readings for Week 1 of class.  Remember that I do not distribute my lecture notes.  Note also that you are responsible for all of the material on which I lecture, regardless of whether or not it is covered in these readings.  These readings are not required, but they are intended to cover everything that I talk about in our lectures (modulo the caution in the previous sentence).  All of them are available for free on line except for the book by Jackson and Moulinier.  It is truly excellent, and well worth the cost.

Homework for week 1

This is the homework for the first week of class.  It is considerably less complicated than what we talked about during the lecture.  For planning purposes, note that it took me about three hours to do this homework, including figuring out the cause of a very stupid bug in a for-loop.

We’re going to look here at the relationships among the various measures that are used in the evaluation of natural language processing systems.  Send me your answers in a single PDF by 17h00 on Monday the 23rd of January.

1. Find a sample of 80-100 words in a language of your choice.  I’ve put some English-language data here and some French-language data here, but you’re free to use any language you like.  Find a friend who speaks the language in question at least as well as you do.  Both of you tag the part of speech of each word.  Then calculate the agreement between the two of you, and explain to me two sources of disagreement between the two of you.  For this question, you will turn in:

  1. The set of tags that you used
  2. The text that you tagged
  3. The tags that you assigned
  4. The tags that your friend assigned
  5. An explanation of two sources of disagreement between the two of you
  6. The calculated agreement.  For this, you have two choices:
  • Do it by hand.  In this case, scan the paper with your calculations and add that to your PDF…
  • …or, write a program to calculate those numbers, and send me your code and your output (a minimal sketch appears after this list).  Again, this should be in your PDF.  Your code can be a program in R, Python, or the programming language of your choice, or even an Excel spreadsheet.
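
If you take the programming option, here is a minimal sketch of the agreement calculation in R, under the assumption that your tags and your friend’s tags are stored in two character vectors of the same length.  The variable names and example tags are mine, not part of the assignment:

    # Hypothetical example vectors; replace these with the tags that you
    # and your friend actually assigned, in word order.
    my.tags     <- c("DET", "NOUN", "VERB", "DET", "ADJ",  "NOUN")
    friend.tags <- c("DET", "NOUN", "VERB", "DET", "NOUN", "NOUN")

    # Observed agreement: the proportion of words to which the two of you
    # assigned the same tag.
    agreement <- sum(my.tags == friend.tags) / length(my.tags)
    print(agreement)  # 5 of the 6 example tags match, so this prints 0.8333333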

2. Given a set of correct answers and a set of answers from your program, determine the true positives, true negatives, false positives, and false negatives.

Suppose that we have a system that classifies tweets as expressing positive opinions about Quisp cereal.  The data is in this file.  A value of yes means that a tweet does express a positive opinion about Quisp cereal, and no means that it does not.  (This could mean that it expresses a negative opinion about Quisp cereal, or a neutral opinion about Quisp cereal, or doesn’t even mention Quisp cereal.  All we know is that it doesn’t express a positive opinion about Quisp cereal.)  The column labelled gold.standard specifies the correct answer.  The column labelled system.output is what our system thinks the answer is.  Note that since we have a binary classification (either yes or no) and a defined set of examples with no boundary issues, we can determine the number of true negatives, which isn’t always the case in language processing.
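
If you take the programming option for this question, here is a minimal sketch in R.  It assumes that you have saved the data as a CSV file called quisp.csv (the file name is my assumption, not the real one) and that the two columns contain the values yes and no, as described above:

    # Read the data; adjust the file name to match your download.
    tweets <- read.csv("quisp.csv")

    gold      <- tweets$gold.standard
    predicted <- tweets$system.output

    # Count the four cells of the confusion matrix.
    true.positives  <- sum(gold == "yes" & predicted == "yes")
    true.negatives  <- sum(gold == "no"  & predicted == "no")
    false.positives <- sum(gold == "no"  & predicted == "yes")
    false.negatives <- sum(gold == "yes" & predicted == "no")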

For this question, submit the counts of true positives, true negatives, false positives, and false negatives.  So, your answer should look something like this:

  • true positives: 613
  • true negatives: 1024
  • false positives: 1789
  • false negatives: 1871


3. With the numbers from your answer to Question 2, calculate the precision, recall, and F-measure.  You have the same two options as in Question 1.

4. Now calculate the accuracy.  You have the same two options as in Question 1.
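
If you take the programming option for Questions 3 and 4, here is a minimal sketch in R, assuming that the four counts from Question 2 are already stored in the variables from the sketch above:

    # Precision, recall, and F-measure (Question 3).
    precision <- true.positives / (true.positives + false.positives)
    recall    <- true.positives / (true.positives + false.negatives)
    f.measure <- (2 * precision * recall) / (precision + recall)

    # Accuracy (Question 4).
    accuracy <- (true.positives + true.negatives) /
      (true.positives + true.negatives + false.positives + false.negatives)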

5. One of the problems with accuracy is that it can be a terrible over-estimate of the aspects of system performance that you care about.  This is especially true when what you care about most is identifying the positive cases, and even more so when those positive cases are rare.

To see how this works, suppose that in our data set, we have four cases of phone calls to the police emergency number.  Our job is to build a program that correctly classifies phone calls as emergencies when they are, in fact, emergencies.  Of the four true emergencies, our system correctly identifies only three.  Also suppose that if a situation is not an emergency, the system always says, correctly, that it is not an emergency.  Calculate the accuracy as the number of true negatives goes up, which means that the positives become rarer and rarer, from 0 true negatives to 100 true negatives.  Graph this with the number of true negatives on the x axis and the accuracy on the y axis.  As always, make the range for accuracy on the y axis be from 0 to 1.0.

To clarify: your first data point will be 3 true positives, 1 false negative, and no true negatives or false positives.  The next data point will be 3 true positives, 1 false negative, 1 true negative, and no false positives.  Continue until you have the data point for 3 true positives, 100 true negatives, 1 false negative, and no false positives.
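
If you generate the graph with a program rather than by hand, here is a minimal sketch in R, with the counts fixed as in the clarification above (the variable names are mine):

    # Fixed counts from the emergency-call scenario: 3 true positives,
    # 1 false negative, no false positives; true negatives vary.
    tp <- 3; fn <- 1; fp <- 0
    tn <- 0:100

    # Accuracy at each number of true negatives.
    accuracy <- (tp + tn) / (tp + tn + fp + fn)

    # True negatives on the x axis, accuracy on the y axis, y from 0 to 1.0.
    plot(tn, accuracy, type = "l",
         xlab = "true negatives", ylab = "accuracy", ylim = c(0, 1))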

6. Now let’s see how F-measure is affected by the rarity of the positive cases.  We’ll model the same situation: the true negatives go up and up, while the number of correctly and incorrectly labeled positives (i.e., true positives and false negatives) stays the same.  Plot the true negatives on the x axis and the F-measure on the y axis.  As always, make the range for F-measure on the y axis be from 0 to 1.0.
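
The plotting sketch from Question 5 carries over here with only the y values changed; again, the variable names are mine:

    # Same scenario: true positives and false negatives are fixed, false
    # positives are 0, and true negatives vary from 0 to 100.
    tp <- 3; fn <- 1; fp <- 0
    tn <- 0:100

    # F-measure at each number of true negatives.
    f.measure <- sapply(tn, function(true.neg) {
      precision <- tp / (tp + fp)
      recall    <- tp / (tp + fn)
      (2 * precision * recall) / (precision + recall)
    })

    # True negatives on the x axis, F-measure on the y axis, y from 0 to 1.0.
    plot(tn, f.measure, type = "l",
         xlab = "true negatives", ylab = "F-measure", ylim = c(0, 1))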

7. What is the bug in this line of code?

       f.measure = 2 * precision * recall

8. In what situation will your calculation of accuracy always cause your program to crash unless you check for the relevant input and/or catch the resulting exception?