In this third post about linguistic tools, I'll be discussing software that I use for acoustic analysis. Praat is one of the premier acoustic analysis tools available for computers. While there are probably commercial software products out there that are more powerful and have more bells and whistles, Praat offers some of the best ways to visualize and manipulate sound while being free and cross-platform. It's not completely intuitive, but it makes it quite easy to explore the sound space of a recording, especially recorded speech. I ran a workshop on the basics of how to use it, with online materials that you can practice with if you want to learn more, and there are other great tutorials online that you should search for.

One of the best features of Praat is the ability to segment sounds using TextGrids, which are basically text files that identify sections of a sound file using timestamps. The benefit of this is that once you have properly annotated a sound file, you can use scripts to automate analyses, which saves a lot of time that would otherwise be spent taking individual measurements. When I first started my PhD I spent a good amount of time learning to write Praat scripts, which turned out to be a continuation of the programming I learned when I was younger (Basic, QBasic) and a worthy introduction to programming languages like Python.

Since this has turned out to be a post that discusses Praat scripting, I'm going to introduce/attach some of the scripts I wrote/use for acoustic analysis, and link to some of the many other places you can find scripts for your particular use case. In my case these scripts are mainly in service of documentation and description of endangered and unwritten languages, but maybe others will find them useful as well.
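To give a sense of what scripted automation over TextGrids looks like, here is a minimal sketch (not one of my scripts; the file name and tier number are placeholders) that loops through a labelled interval tier and prints the duration of every annotated segment:

```
# Minimal sketch: open a TextGrid and report the duration of every
# labelled interval on tier 1. File name and tier number are
# placeholders - adjust them to match your own annotations.
textgrid = Read from file: "recording.TextGrid"
numberOfIntervals = Get number of intervals: 1
for i to numberOfIntervals
    label$ = Get label of interval: 1, i
    if label$ <> ""
        start = Get starting point: 1, i
        end = Get end point: 1, i
        appendInfoLine: label$, tab$, fixed$ (end - start, 3), " s"
    endif
endfor
```

Once a loop like this is in place, swapping the "print a duration" step for any other measurement is straightforward, which is exactly what the scripts below do.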
This script ("dur_f0_f1_f2_f3_intensity.praat") is one that I modified (originally from this script but more recently I based it on this script) to give automatic measurements of segmented sounds in a TextGrid. It is an updated version of the “msr&check…" file that I made available along with the workshop I linked to above. At the time, I had recorded several wordlists in Pnar, and I spent countless hours segmenting the sounds in each word. My thinking was that even if my segmentation wasn't precise, the sheer number of sounds and their tabulation would allow me to run valid quantitative analyses. As it worked out, this was mostly the case, and I was able to target the outliers for closer examination. I also got better at recognizing Pnar sounds from all the time I spent with the words. I have now updated this script to work nicely with the following script, which plots vowels for you in the Praat picture window, which can produce print-publication-friendly images. Vowel plot for formants: ![]()
Vowel plot for formants:

Another script, which I wrote/modified from other bits, takes a comma-delimited CSV spreadsheet with formant values and plots them (in the standard vowel chart format) as a Praat drawing, with an oval marking their standard deviation (“draw_formants_plot_std_dev.praat”). I wrote this primarily to produce a clearer image than the one produced by JPlotFormants for my PhD thesis. Thanks also to the Praat User Group for their help with getting the script right. I recently modified this script to work nicely with the automatic measurement script above. What this means is that you can segment all your words using TextGrids, run the script above to produce a CSV, and then just run this script to plot the vowel characters from that CSV. I implemented a 'Sequential' option for the plot so you can plot one vowel at a time, which means that you can leave all the segmented consonants (and VOT annotations) in the CSV file for later analysis. Or you can remove them; it's up to you. Just keep in mind that if you do have consonants in the CSV, the script WILL try to plot them on the chart unless you choose the Sequential option.
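For anyone curious how this kind of figure is built, here is a rough sketch of the drawing logic for a single vowel in the Praat picture window. It is not the script itself; the column names ("vowel", "f1", "f2"), axis ranges, and file name are assumptions.

```
# Sketch: read a CSV of formant measurements, compute the mean and
# standard deviation of F1/F2 for one vowel, and draw the vowel symbol
# plus a standard-deviation ellipse in the Picture window.
table = Read Table from comma-separated file: "measurements.csv"

Erase all
Select outer viewport: 0, 5, 0, 5
# Reversed axes give the conventional vowel-chart orientation:
# F2 decreases left to right, F1 increases top to bottom.
Axes: 2800, 500, 1100, 200
Draw inner box
Text bottom: "yes", "F2 (Hz)"
Text left: "yes", "F1 (Hz)"

vowel$ = "i"
selectObject: table
subset = Extract rows where column (text): "vowel", "is equal to", vowel$
meanF1 = Get mean: "f1"
meanF2 = Get mean: "f2"
sdF1 = Get standard deviation: "f1"
sdF2 = Get standard deviation: "f2"

Draw ellipse: meanF2 - sdF2, meanF2 + sdF2, meanF1 - sdF1, meanF1 + sdF1
Text: meanF2, "centre", meanF1, "half", vowel$
removeObject: subset
```

Looping that middle section over each vowel label in the table gives you the full chart, which you can then save from the Picture window at print resolution.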
Tone Analysis:

The third script linked here (“tone_analysis.praat”) is one I wrote recently to take continuous measurements of tones without normalization. This is more for exploring tonal systems on a per-speaker basis, allowing the investigator to identify whether length is potentially a factor in the characteristics of a particular tone. I am planning to modify it to allow for percentage-based analysis (and thus normalization) of tones, which the investigator could use to create clearer plots once they have identified the characteristics of the individual tones. But I haven't gotten around to it yet. I'll write another blog post when I do.
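The basic move in this kind of script is to walk through each labelled tone-bearing interval and log the pitch track as you go. The sketch below is not the script itself: it assumes an interval tier 1 and placeholder file names, and uses a fixed 10 ms step. Sampling at, say, ten evenly spaced percentage points of each interval instead would be essentially the time-normalized version mentioned above.

```
# Sketch: step through each labelled interval at a fixed time step
# and record the f0 track to a CSV (time is relative to interval start).
sound = Read from file: "utterance.wav"
textgrid = Read from file: "utterance.TextGrid"

selectObject: sound
pitch = To Pitch: 0, 75, 500

writeFileLine: "tones.csv", "label,time,f0"

selectObject: textgrid
numberOfIntervals = Get number of intervals: 1
for i to numberOfIntervals
    selectObject: textgrid
    label$ = Get label of interval: 1, i
    if label$ <> ""
        start = Get starting point: 1, i
        end = Get end point: 1, i
        t = start
        while t <= end
            selectObject: pitch
            f0 = Get value at time: t, "Hertz", "Linear"
            # f0 will be undefined in voiceless stretches; filter later.
            appendFileLine: "tones.csv", label$, ",", t - start, ",", f0
            t = t + 0.01
        endwhile
    endif
endfor
```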
As a final note, these scripts are really just the tip of the iceberg when it comes to the kind of analysis you can do in Praat. For more on Praat scripting, check out this great tutorial, Will Styler's excellent blog, the scripts he uses/maintains, these resources at UW and these from UCLA. You can also follow along with Bartlomiej Plichta as he leads you through some scripting lessons in his videos, which are very useful.
When you set out to do language documentation and description, one of the first things to know is that you have to collect language data. The primary source of language data is people who speak the language you're interested in, which raises the question of how you record that data. There are some great books and papers on doing linguistic fieldwork of a documentary nature (more than what I've linked to here), but this post is focused more on the tools you use to process your data once it is recorded, as a continuation of my 'Linguistic Tools' post. I also plan to write a longer post on recording audio/video in the field, but for now I'll assume that you've recorded it already. I'll just briefly say that I like using a digital SLR like the Canon Rebel along with a unidirectional mic, in conjunction with a digital audio recorder like the Zoom H4N (ideally with a lapel mic of some kind).
Once you have your data recorded, the next step is to copy it to your computer for processing. Often the digital recordings will be rather large and cumbersome, and you may want to split them into smaller files, depending on how many stories/interactions you recorded. I find post-processing important because it means you can focus on the interaction during the recording itself; then, during processing, you make notes on all the files, their content, and other metadata that will help later, when you're no longer in the field and can't remember all the details. In this processing stage you also want to do two very important things: back up your raw files, and convert your working copies into standard formats for analysis and archiving.
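For the audio side of that conversion step, a GUI tool isn't strictly necessary: Praat can batch it from a script as well. This is just a sketch of that alternative (the file names are placeholders), producing a mono, 44.1 kHz working copy saved as WAV, which matches the quality standard discussed below. Praat writes 16-bit WAV by default.

```
# Sketch: convert a recording to a mono, 44.1 kHz WAV working copy.
# Input/output file names are placeholders.
sound = Read from file: "raw_recording.wav"
mono = Convert to mono
resampled = Resample: 44100, 50
Save as WAV file: "working_copy.wav"
removeObject: sound, mono, resampled
```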
I use two programs for converting video: Media Converter and MPEG Streamclip. You could use just MPEG Streamclip (which has a Windows version), but on a Mac I find that Media Converter is much simpler/easier for reducing the size of the file, stripping out the audio, or other purposes. MPEG Streamclip is great, though, for combining multiple clips or splitting one clip into several. In each conversion you want to ensure that the video/audio quality is not compromised, depending on what you want to use it for. In my case I am mostly doing acoustic analysis, so I'm more interested in preserving the audio at CD quality (16-bit, 44.1 kHz), which is the standard for acoustic analysis and archiving. In any case, since I've backed up the raw files, I can always copy from them if I mess up my working files and need to restore the quality.

To process/convert and work with audio I use Audacity - this is primarily for processing audio, not for acoustic analysis. Audacity supports a large range of encodings and formats, and you can select portions of the sound file to do basic processing like boosting the signal, removing noise, etc. These are generally not the best things to do to an audio signal, but they can be useful - in my case, particularly when I'm playing the audio back and need to hear what someone said in the background during a conversation, or when doing other kinds of manipulations.

I can't stress enough the importance of backing up data and copying your data files to a new (staging) folder. This really ensures that you can always rewind the clock and reset, while being confident in exploring the data itself in your working folder. This should become an important part of your workflow so that it is second nature. We will all make mistakes at some point, but understanding the importance of backing up and creating metadata for your backups will help to mitigate potentially catastrophic events. Happy converting!

When I started my PhD program in Linguistics (language documentation and description), I had some experience with linguistic analysis, but not to the degree I would need in order to complete my PhD. I had tuned my ear to be able to hear the sounds of the IPA, and had practice transcribing and learning a range of languages, but I had never analyzed an unwritten language completely by myself. During the course of my PhD I learned much more about how to analyze languages 'from the ground up', so to speak.
Along the way, I discovered that there were some excellent tools that made me much more effective and efficient at the task of documenting and describing an unwritten language. I was fortunate that I already had a good foundation in recording and processing audio from my experience recording, mixing, and releasing my music, so the fact that the audio data I recorded would form the basis of my analysis didn't faze me. However, there was a whole other set of tools that would allow me to investigate the details of the language I planned to work on. Each of these programs is open source or free, though some are developed for Windows and others for MacOS, which might be a problem for some people. Since I grew up with DOS and Windows but later switched to a Mac, I'm comfortable with both systems. The build quality of Apple's laptops was what made the Mac my first choice for travel and portability combined with power. I say 'was' since some of Apple's recent design choices mean I might be switching back to Windows on my next laptop. But for now I run an old Windows version on my Mac via VirtualBox, or bundle Windows software in a Wine port so I can run it as a native app in MacOS. I plan to describe each of these tools in more detail in future posts, but for now here's a list of the tools I currently use for my linguistic work:
Tools other linguists use, but that I don't use much:
Just a quick blog post to mention that one of the tools I use in language documentation and description, Transcriber, has been newly repackaged for use with OS X El Capitan! This is a big deal because previous versions (from 2013) failed to work, and the program was then supposedly 'updated' (and still didn't work), so I've been using the 2005 Windows version in VirtualBox. But I just tested the new release (new as of 4 hours ago) and it works great on my Mac (you just have to update the settings to default to UTF-8 for character encoding) and also with my trs2txt converter for Toolbox! Happy transcribing!