Final Project and Word2Vec
I’ve spent the past four weeks focusing almost entirely on theory. This week, I’m using Chollet’s Deep Learning with Python and Géron’s Hands-On Machine Learning with Scikit-Learn and TensorFlow to ramp up my TensorFlow skills. I’ll be doing exercises provided in the books, or of my own design, over the next month. I’m excited to start building things.
Narrowing Down Final Project Topics
I’ve been working with Igor to come up with a final project. A couple of weeks ago, we narrowed the choices down to three topics.
One potential project was a model that could predict the outcome of a fight given some data about a future match, e.g. the opponents, the city where the fight is being held, etc.
I would train the model using the sort of information I like to know when trying to guess the outcome of a fight, including:
- age
- sex
- height
- reach
- weight class
- stance (i.e. southpaw or orthodox)
- current team/gym
- past teams/gyms
- current and former training partners
- martial arts background (e.g. boxing, wrestling, karate)
- takedown defense
- striking defense
- successful takedowns
- significant strikes landed/minute
- significant strike accuracy
- opponents whom they defeated/to whom they lost
- the rounds in which they won/lost
- the manner of the wins/losses (e.g. submission, TKO)
- title fights won
- hadouken ability
I originally wanted to do it just as a toy project (it’s not very “serious” compared to my other options and is really just a way to let me geek out about MMA), but as we talked about it, it seemed more and more like it could be a solid final project. I could get the data by scraping Wikipedia as well as the UFC and Bellator sites, but I figured that would involve a lot of cleaning and I didn’t want to spend a bunch of time doing that. Having said that, Chris Muir built a scraper in R, so it’s doable, but…meh. Again, I’m still pretty interested in doing it just for the learning.
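For a sense of scale, the modeling side wouldn’t need to be fancy to start with: encode each matchup as a numeric feature vector (ages, reach, takedown defense, significant strikes landed per minute, and so on) and feed it to a small classifier that predicts which fighter wins. The sketch below is a minimal Keras version of that idea; the feature count, layer sizes, and the random stand-in data are all made up for illustration, not real fight data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: one row per fight, with numeric features describing both
# fighters (age, height, reach, takedown defense, strikes landed/minute, ...).
# Random noise stands in for the scraped and cleaned fight records.
num_fights, num_features = 1000, 16
rng = np.random.default_rng(0)
X = rng.normal(size=(num_fights, num_features)).astype("float32")
y = rng.integers(0, 2, size=num_fights).astype("float32")  # 1 = fighter A wins

# A small fully connected network for binary classification on tabular features.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(num_features,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```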
Another option was doing something around medical applications of ML. There were two specific projects I thought would be really interesting. One project involved training a model to listen to doctor-patient interactions and transcribe the relevant information into the correct part of the patient’s electronic medical record. Things like blood pressure, prognosis, allergies, etc. would be extracted from conversation. Another project in this space would explore the use of ML in vaccine development. I’m interested in vaccines mostly because it seems like they’d be especially difficult to run trials for given that you can’t just…give patients a vaccine, expose them to a disease, and hope they don’t die.
Hippocratic oath and all that.
There was some concern as to whether I’d be able to get the data I need, so I decided not to go down this route.
The third option, and the one I’ll be going with, is a project around training a model to be “curious” and to explore its environment by interacting with it.
A good example of this can be found here.
Baxter learns by correlating its interactions with objects on the table to outcomes. After about 400 hours of training, it gets pretty good at manipulating objects.
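One common way to make that notion of “curiosity” concrete is to give the agent an intrinsic reward proportional to how badly it predicts the outcomes of its own actions: interactions it doesn’t understand yet yield a big bonus, ones it has mastered yield almost none. The sketch below is just that idea in toy form — a made-up linear environment and random actions, nothing to do with the actual Baxter setup — so take it as an illustration of prediction-error curiosity rather than the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": the next state is a fixed linear function of state and action.
W_true = rng.normal(size=(4, 4)) * 0.5
U_true = rng.normal(size=(2, 4)) * 0.5

def step(state, action):
    return state @ W_true + action @ U_true

# The agent's forward model tries to predict the next state. Its prediction
# error doubles as an intrinsic "curiosity" reward: poorly understood
# interactions are rewarded, well-understood ones are not.
W_hat = np.zeros((4, 4))
U_hat = np.zeros((2, 4))
lr = 0.01

for t in range(2001):
    state = rng.normal(size=4)    # random situation
    action = rng.normal(size=2)   # random exploratory action
    next_state = step(state, action)

    predicted = state @ W_hat + action @ U_hat
    error = predicted - next_state
    curiosity_reward = float(np.mean(error ** 2))

    # Improve the forward model with a gradient step on the squared error;
    # as it improves, the curiosity reward for familiar interactions shrinks.
    W_hat -= lr * np.outer(state, error)
    U_hat -= lr * np.outer(action, error)

    if t % 500 == 0:
        print(f"step {t:4d}  curiosity (prediction error): {curiosity_reward:.4f}")
```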
Philosophize This! Episode on the Meaning of Words
The most recent episode of Philosophize This!, titled “Derrida and Words”, is great (you could say that about almost every episode of Philosophize This!). The episode discusses how the meaning of a word isn’t this isolated thing, but is instead a description of the word’s relation to other words. When you want to define a term, you inevitably use…other words. The definition presupposes some shared knowledge. Words sit in a web and stay attached to the language only via their connection to other words. It’s assumed, when you are providing a definition to someone, that they will understand some subset of this web. Of course, their understanding of words is only as a relation to…you get where this is going.
If I wanted to define the word “cup,” I might say “a cup is a container out of which you drink liquids.” You would need to know what a liquid is, and to explain that you might talk about the four (common) states of matter - solid, liquid, gas, and plasma. And before even talking about the differences between the states, you’d first have to explain what you mean by matter, etc., etc. It’s like you’re stuck answering the infinite whys of a four-year-old.
I bring all this up because a) I’ll be diving into natural language processing in the next few weeks and b) this brings to mind Word2vec. Word2vec is a family of models used to map words to vectors. An interesting thing about the mappings is that words that are similar (e.g. words with similar meanings or that are often used in the same contexts) have similar mappings: the vectors that represent them point in roughly the same direction, i.e. they have high cosine similarity.
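Here’s a rough sense of what that looks like in code. This is just a sketch using gensim’s Word2Vec on a five-sentence toy corpus (parameter names like vector_size are from gensim 4.x and differ in older versions, and similarities on a corpus this small will be noisy), but the shape of the workflow is the point: train on sentences, get back a vector per word, compare words by cosine similarity.

```python
from gensim.models import Word2Vec

# A tiny toy corpus; real Word2vec models are trained on millions of sentences.
sentences = [
    ["i", "drink", "coffee", "from", "a", "cup"],
    ["i", "drink", "tea", "from", "a", "mug"],
    ["the", "cup", "holds", "hot", "coffee"],
    ["the", "mug", "holds", "hot", "tea"],
    ["a", "cup", "is", "a", "container", "for", "liquids"],
]

# Train a small skip-gram model. `vector_size` is the dimensionality of the
# word vectors (older gensim versions call this parameter `size`).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

# Words used in similar contexts end up with vectors pointing in roughly the
# same direction, i.e. with high cosine similarity.
print(model.wv.similarity("cup", "mug"))        # cosine similarity between two words
print(model.wv.most_similar("coffee", topn=3))  # nearest neighbors by cosine similarity
```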