Why is my logo a weird-looking fruit-thing?
It's a pawpaw, the largest edible fruit indigenous to North America. (Gourds are vegetables, people!) It's native to where I grew up (Indiana), and a delicious well-kept secret, even to locals. Imagine a cross between banana custard and a mango—pawpaws are absolutely amazing!
As a fellow Hoosier also aspiring to greatness, I have a strong affinity for the little tree, and on top of that, I'm a big pawpaw proselytizer—I've gotten my labmates hooked on the stuff!
What research am I doing at the moment?
My interests are still as broad as ever, but I've been focusing on how people understand each other when they sometimes talk so differently. This has manifested itself in a bunch of different projects.
How can researchers improve statistical power for skewed data?
Many statistical techniques, like ANOVAs and linear mixed models, make a key assumption about their data: namely, that the part of the data they can't account for (i.e., the "residuals") follows a bell-shaped, "normal" distribution. But researchers still use these techniques with data where this is known to be false, such as reading times. How much do violations of these assumptions actually impact our ability to get results using these methods? How can statisticians and researchers improve their statistical power?
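To give a feel for what a power simulation even looks like, here's a toy sketch: it uses made-up lognormal "reading times" and a plain t-test standing in for the mixed models and bootstrapped data described below, and simply counts how often each analysis detects a small effect.

```python
# Toy power simulation (not the real pipeline): lognormal "reading times",
# a small condition effect, and a raw vs. log-transformed comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_per_cond = 2000, 40   # made-up sizes; the real runs are far larger
effect = 0.08                   # condition effect on the log (multiplicative) scale

hits_raw = hits_log = 0
for _ in range(n_sims):
    # exp(normal) gives right-skewed, lognormal RTs (roughly 400 ms baseline)
    cond_a = np.exp(rng.normal(6.0, 0.4, n_per_cond))
    cond_b = np.exp(rng.normal(6.0 + effect, 0.4, n_per_cond))
    hits_raw += stats.ttest_ind(cond_a, cond_b).pvalue < 0.05
    hits_log += stats.ttest_ind(np.log(cond_a), np.log(cond_b)).pvalue < 0.05

print(f"power, raw RTs: {hits_raw / n_sims:.2f}")
print(f"power, log RTs: {hits_log / n_sims:.2f}")
```

In toy runs like this, the log-scale analysis tends to come out ahead when the skew is strong; the cluster runs described below ask the same question at scale, on real data, with real mixed models.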
(Large-scale statistical modeling)
My research here constitutes the first large-scale power simulation of reading times (RTs) using bootstrapped natural RT data. In essence, using the University of Rochester's computing cluster, I've been running millions upon millions of statistical models in parallel on actual RT data, comparing traditional statistical techniques with ones that power-transform the data. This work should give scientists better guidance on how to detect effects in skewed data!

How does the need for clear communication shape our language?
Languages all change over time, as do the sounds that are used in those languages. Sometimes, two distinct sounds merge to form a single sound. For example, many parts of America now pronounce caught and cot the same. You might think which sounds merge would be completely random, but if the need to understand each other exerts some pressure on how sounds change, some changes are more likely than others. Theoretically, the way we use our languages could actually have a hand in how they evolve!
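To make "some changes are more likely than others" concrete, here's a toy functional-load calculation on a tiny, made-up lexicon with made-up frequencies. (My actual models are much richer, folding in acoustic/perceptual similarity, the full lexicon, and context.)

```python
# Toy functional-load calculation: how much frequency-weighted confusion
# would a merger create? Everything here (transcriptions, frequencies) is made up.
from collections import defaultdict

lexicon = {            # word: (rough phonemic transcription, corpus frequency)
    "cot":    ("kAt", 120),
    "caught": ("kOt",  90),
    "dawn":   ("dOn",  60),
    "don":    ("dAn",  40),
    "pin":    ("pIn", 200),
    "pen":    ("pEn", 180),
}

def merger_cost(lexicon, lose, keep):
    """Frequency-weighted count of word pairs that become homophones
    when the phoneme `lose` collapses into `keep`."""
    merged = defaultdict(list)
    for word, (phones, freq) in lexicon.items():
        merged[phones.replace(lose, keep)].append((word, freq))
    cost = 0
    for words in merged.values():
        for i, (_, f1) in enumerate(words):
            for _, f2 in words[i + 1:]:
                cost += f1 * f2    # confusing frequent words is costlier
    return cost

print("caught/cot merger cost:", merger_cost(lexicon, "O", "A"))
print("pin/pen merger cost:   ", merger_cost(lexicon, "E", "I"))
```

In this made-up lexicon the pin/pen merger comes out costlier than caught/cot simply because pin and pen are both frequent; that's the basic intuition for why lexicon structure and usage should make some mergers more disruptive than others.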
(Computational modeling)
I'm developing computational models to estimate the impact of phonological mergers on how easy it is to understand each word in the English language. Unlike previous work, I'm doing this in a way that attempts to capture how the acoustic/perceptual similarities of different sounds interact with the structure of the lexicon and how English is used (e.g., incorporating context).

How do we get better at understanding accents?
Unfamiliarly accented speech can be hard to process initially, but we can rapidly improve our understanding by merely listening for a few minutes. Does this adaptation reflect merely surface-level changes in comprehension and reaction times, or does our processing become more efficient at a deeper level? Pupil dilation is linked to certain kinds of processing difficulty—I'm interested in using pupillometry to explore how accent adaptation affects processing.
(Pupillometry)

How do we optimally handle information to learn an accent?
There is a common conception in linguistics that we immediately compress linguistic information—that as soon as we categorize linguistic elements (sounds/words/meanings), we discard the information that led us to that choice. For example, it's a common occurrence to remember the gist of a sentence but not the exact words or how those words sounded. But recent studies have found that information arriving after we've supposedly categorized linguistic elements can still influence those categorizations. This suggests that we actually maintain some lower-level information about these elements instead of discarding it as quickly as possible.
(Over-the-web behavioral experiments)
I'm using my knowledge of accent adaptation to find out what sort of information listeners maintain and how long they can maintain it, while avoiding limitations of previous research. Conducting experiments over the web with Amazon's Mechanical Turk, I have demonstrated that subtitles improve adaptation to foreign accents. By adjusting the delay between the speech and the subtitles and measuring how much of that benefit remains, I've been able to investigate how long listeners maintain that lower-level information.
What else am I interested in non-academically?
I'm interested in pretty much everything, but right now I'm really pumped about applying the computational and statistical skills I've developed to things I care about personally. It's fun and way easier than most people think. For example, I really enjoy reading webcomics, but I hate being in limbo about whether creators are going to keep updating their work, so I'm in the process of building a Bayesian model that estimates how long it's likely to be before a comic updates again. In the meantime, I've also looked at political finance and other fun areas!
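As a flavor of the idea (and definitely not the finished model), here's a bare-bones sketch: treat the days between a comic's updates as exponentially distributed with an unknown rate, put a conjugate Gamma prior on that rate, and ask how likely a new update is within the next week. All of the numbers are made up.

```python
# Toy update-prediction model: exponential gaps between updates, Gamma prior
# on the update rate, posterior predictive probability of an update soon.
import numpy as np

gaps = np.array([6, 8, 5, 30, 7, 9])   # made-up days between past updates
days_since_last = 12                    # made-up time since the last update

# Gamma(shape a, rate b) prior; conjugacy gives Gamma(a + n, b + sum(gaps)).
a0, b0 = 1.0, 7.0
a_post = a0 + len(gaps)
b_post = b0 + gaps.sum()

# Posterior predictive CDF for the next gap: P(gap <= t) = 1 - (b / (b + t))^a.
def pred_cdf(t, a, b):
    return 1.0 - (b / (b + t)) ** a

# Condition on having already waited `days_since_last` days without an update.
p_still_waiting = 1.0 - pred_cdf(days_since_last, a_post, b_post)
p_within_week = (pred_cdf(days_since_last + 7, a_post, b_post)
                 - pred_cdf(days_since_last, a_post, b_post)) / p_still_waiting
print(f"P(update within the next 7 days) ~ {p_within_week:.2f}")
```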
How else can you connect with me?
Why did I think a Q&A format was a good idea for an 'About Me' page?
Ya got me. My landing page was already descriptive enough for most introductions—I really just made this page to explain what the pawpaw logo was.