The last post was a bit of a brain dump to make sure I didn't forget a few lessons I learned, in part because I knew I was quitting the job that involved doing ML-type things. While I was working there I of course learned a lot and (I think) acquitted myself pretty well, but language processing and machine learning are not really what I spent four years doing for my PhD. Python is a programming language I picked up to make my work in grammatical description and syntax easier, and while I find ML (and programming) pretty interesting, my main interest lies in understanding how languages work through comparison, with the ultimate goal of reconstructing linguistic structures and (hopefully) prehistory.
A year and a half ago or so I started working on a grant proposal for that exact thing with some researchers at the University of Zurich. It is a relatively young department doing some really cool research in typology, processing, and language acquisition from a corpus-based perspective, on multiple languages (both Indo-European and non-IE families/phyla). At the same time, historical linguistics is a huge focus in the department, as is modeling language change. This is super exciting because I take the perspective that language is spoken by individuals in communities who acquire it from their forebears (history) and use it as a tool for communication (processing), which gives rise to statistical tendencies that all languages share (typology). Since it is individuals who use language, they do so in idiosyncratic ways; but since language is learned and guided by principles of processing, the only way to get at both the commonality and the uniqueness of language is by investigating actual language corpora (recordings, transcriptions, etc.). Of course the story of how languages change is much more complex and involves many more factors, but be that as it may, this is a great place to be.
Picture: Lake Zurich from the hill above the university
So, long story short, we found out last October that the grant had been funded, and the family and I started making plans to move to Zurich. More on that later, perhaps. With this project, our goal at the moment is to build a database of Austroasiatic language corpora that we can then investigate for all sorts of interesting phenomena, but focusing (initially at least) on word order. By comparing word order in multiple languages of the same family we intend to make an effort toward reconstructing the form of the parent languages from which the present-day spoken languages diverged, and also to identify language contact and interaction effects to contribute to discussions about the development of word order patterns cross-linguistically.
I have been quite lax with posting here, mainly because I have been working very hard on a difficult problem. My current job involves (among other things) trying to automate a psychological coding system for text. This is pretty similar to research in Sentiment Analysis (SA, see this brief introduction), a task in Natural Language Processing (NLP) that attempts to predict the 'positivity' or 'negativity' of sentences (i.e. a classification task), such as tweets from Twitter. Companies find this useful for (among other things) getting a broad understanding of how consumers respond to their products and services.
The coding system I am trying to automate is a bit more complex, but I am using techniques similar to those used in SA. I started with SVMs and other classifiers from the SciKit-Learn library, and I have now moved on to neural networks. These are essentially machine learning models that allow a computer to generalize patterns in data that correspond to particular outputs. Humans do this pretty naturally - we recognize patterns in language, for example, that allow us to parse sounds into words and words into units of meaning that, when combined, help us communicate.
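To make that a little more concrete, here is a minimal sketch of the kind of classical pipeline I started with - not the actual coding-system model, and the example sentences, labels, and parameters are invented purely for illustration:

    # Toy example: a TF-IDF + linear SVM text classifier in scikit-learn.
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    texts = ["love this product", "terrible customer service", "it was fine I guess"]
    labels = ["positive", "negative", "neutral"]

    classifier = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # words and word pairs, weighted by TF-IDF
        ("svm", LinearSVC()),                            # a linear support vector classifier
    ])
    classifier.fit(texts, labels)                        # learn label patterns from labeled examples
    print(classifier.predict(["the service was terrible"]))

With only three training sentences this obviously won't generalize, which is exactly the data problem I discuss below.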
A data-driven approach
The idea with neural networks is that given enough labeled data, the computer's statistical model can capture the patterns that correspond to the labels and predict what the labels should be for data it has never seen before. This is essentially how optical character recognition (OCR) works - enough data has been fed to the computer that it has 'learned' what an "A" looks like across a wide variety of images. But the model can only make good generalizations if it has a lot of data, and a good diversity of data.
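For the neural network case, the same idea looks something like the sketch below in Keras (one of the libraries mentioned at the end of this post). This is not the model I use for the coding system; the input size, layer sizes, and the X_train/y_train variables are all placeholders:

    # Toy example: a small feed-forward network for binary text classification,
    # assuming each text has already been converted to a fixed-length feature vector.
    from keras.models import Sequential
    from keras.layers import Dense, Dropout

    model = Sequential()
    model.add(Dense(64, activation="relu", input_shape=(10000,)))  # 10000-dim bag-of-words input (placeholder size)
    model.add(Dropout(0.5))                                        # regularization, which helps a bit on small datasets
    model.add(Dense(1, activation="sigmoid"))                      # outputs a probability for the label
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

    # model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)

Given enough labeled examples, the weights come to encode the patterns that predict the labels; with too few, the network mostly just memorizes the training data.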
It is a common issue, especially in linguistics, that the amount of data available for a particular problem is limited (e.g. descriptions of languages are often based on a 5-hour recorded corpus supplemented with elicitation, and psycholinguistic experiments are usually conducted with sample sizes of 5-20 participants, though larger samples would be preferable). This is partly because of how time-consuming data collection can be - combine this with the fact that you might be dealing with multiple languages and the issue is compounded. But even within a single language, particular problems only have limited datasets. And if we want to automate a system that has typically required trained humans (consider that PhD students who describe a language usually train/read for at least a year before fieldwork, after having completed an MA), there may be even less data than usual available for training, particularly data in the correct format.
Addressing problems in the neural network
This lack of data is a significant problem that we have been working to overcome by creating more coded data. In building ML models I have continued working with the existing data to achieve decent results on a small dataset, the hope being that once more coded data is available it can be incorporated and give a better result. Along the way I have come across several important learning points that I thought I would write here as a reference of sorts, to add to other great posts that have been helpful to me.
Final thoughts and further links
These are just a few of the things that I have learned in the past few months while trying to sort out my classification problems. Papers and online forums have been extremely helpful in developing my understanding of the issues involved, and I have benefited particularly from this blog and this one (among others) and from examples of models in the Keras GitHub repository. For a good (though brief) discussion of trying to implement state-of-the-art text classification models, see this post. Ultimately, as one person noted on a forum, developing neural networks is as much an art as a science, requiring experimentation and intuition to figure out how to apply a particular model architecture to a particular problem.
One of the concerns that has occupied my mind for the past few years is the question of data accessibility in the field of linguistics. I am happy to announce that the data that underpins my grammatical description of Pnar is now freely available as a downloadable archive in audio and text form (anonymized where requested by participants). You can find the link to the dataset at the bottom of this post, but in the meantime I'd like to explain my views on data access and give a brief explanation of the tool I've used to make my linguistic data accessible.
Why accessible data?
Those who are familiar with linguistics understand that traditional descriptions of language are often based on recorded, transcribed and translated interviews and stories by speakers of the language. Although some theoretical work may be based on a few utterances or a single example, most linguistic work is based on many actual examples from utterances that real speakers produce.
One issue here is that there is so much between- and within-speaker variation in speech that unless the data underlying an analysis is actually accessible to other linguists, the veracity of that analysis can easily be questioned. In the interest of scientific enquiry, then, it is incumbent on the analyst-linguist to make their actual data accessible to other researchers in at least some form, whether in an archive or in a database. Having the data accessible to multiple researchers may lead to disagreements about analysis (there may be more than one way of analyzing a particular linguistic structure, for example), but ultimately such disagreements are healthy because they expand our knowledge.
This touches on a larger issue in the world of science, that of verifiability and reproducibility of research, which has galvanized the larger scientific community towards Open Science (see this blog post for an explanation, and check out this OSF paper), and in some fields, such as Psychology, has actually resulted in a whole journal devoted to "replication studies". These kinds of studies aim to replicate the results and findings of a particular study by following the same procedure as the original researchers. When replication studies uphold a particular result, it becomes more likely that the original study's findings were not the result of a statistical anomaly or of falsification of data - the latter being a very serious problem that can lead to erroneous claims requiring retraction. For more on this, visit RetractionWatch.com.
What this means in the case of linguistic data is that the recordings, transcriptions, and translations that underlie a grammatical description or other study should, whenever possible, be made accessible to other linguists. Data sharing can be a touchy issue simply because of a) the ethical concerns of the providers of the data, b) potential cultural taboos, and c) the interests of the linguist who initially made and processed the data.
With proper permissions sought and precautions taken, these concerns can be minimized or dealt with appropriately. A linguist needs to (minimally) communicate to participants how the data will be used, take the time to anonymize recordings and annotations when necessary, and create a license that constrains how the data can be used in the future. Ideally, if you are doing your research correctly, your university's Institutional Review Board will have already helped you think through these things. There are also some excellent books, papers and chapters that deal (at least somewhat) with this subject, and there is a set of standards for social science research (with human subjects), and specifically for linguistics, that researchers should be aware of.
Some reasons linguists don't share data
The final point (c, the interests of the linguist) is really the sticking point for most people. The reality is that many linguists do not want to release data, for several reasons:
This brings me back to the earlier ruminations that started this post, namely that data produced by a linguist and which underpins their work ought to be accessible to other scientists and linguists. When I first submitted my PhD at NTU, I took a look at some of the options for data archiving, and I approached the university library (which keeps digital copies of all theses submitted at the university) to see if they could also store my audio and transcription data (over 1GB). About a year ago, they contacted me to let me know that they were developing such a service, something called DataVerse, and wanted to know if they could use my dataset to test it. I was happy to have them do so, and after some tweaking and time, this tool is now available for use.
DataVerse is a database/archive tool developed at Harvard University that allows researchers to store datasets that other researchers can download for use and testing. It supports the Open Science initiative by making data accessible and open. It also solves one of the problems I noted above by creating a unique persistent identifier (a DOI) and a citation for the dataset. You can check out my dataset at its DOI here and download it for research and non-commercial purposes.
As I was thinking about this previously, I realized that what I wanted was not really an archive but a database that would allow me to develop and annotate my data further. Unfortunately DataVerse is not that - it is basically just a storage tool. What is nice is that it provides versioning, so the curator of the dataset can upload and publish changes. I think I may have to create my own database if I want something that will let me explore the data better. But for now, the data is freely accessible for other linguists (even though my analysis isn't perfect), which is a bit of a load off my mind.
In a previous post I discussed some of the benefits I discovered in using LaTeX with LyX as a front-end. Another extremely useful tool to learn is a bibliography manager. If you are like I was, and often develop a new bibliography for each paper you write, this is something you might want to consider. On the other hand, you might be used to Microsoft applications and already be familiar with bibliography managers (such as Zotero) that integrate pretty well with the MS Office family.
As I started my PhD and began writing more papers, I noticed that many of my citations were the same. Rather than copy-pasting from previous papers and then adjusting the formatting for each submission, I realized it would be much easier to have a centralized location for all the papers I wanted to cite, and have the computer deal with the formatting according to the style sheet I needed to use.
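To make the idea concrete: a centralized bibliography is just a plain-text '.bib' file of entries that you cite by key, leaving the formatting to the style sheet. The entry below is a made-up placeholder (not a real reference), and the exact citation command depends on the packages and style you use:

    % One entry in a central bibliography file, e.g. mylibrary.bib (a placeholder name):
    @article{author2010example,
      author  = {Author, A. N.},
      year    = {2010},
      title   = {An example article},
      journal = {Journal of Examples},
    }

    % In the paper itself, you cite by key and let the style sheet handle the rest:
    % ... as argued by \cite{author2010example}, word order is interesting.

Change the style sheet and every citation and bibliography entry is reformatted for you, with nothing to copy-paste or adjust by hand.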
Fortunately, support for this is baked into LaTeX/LyX. There are various good tools that integrate with BibTeX (as the bibliography system in LaTeX is called), but the one that I found most useful for my purposes is BibDesk. Rather than explain how I do it, I’ll point you to this excellent tutorial for Mac, which describes how to set it up. In the rest of this post I’ll simply give my reasons for using BibDesk.
Each of these reasons alone is worth getting your act together and creating a single bibliography repository. You can likely think of other good reasons, which just means there is no excuse not to do so.
Another issue that I am thinking about is how to make my library of citations/documents available on any computer with internet access via the cloud. This would ensure at the very least that I wouldn't worry as much about losing it if my computer dies (though I'm still going to make backups regularly). A fellow academic and friend of mine has managed to integrate all of his citations and PDFs with Zotero, and make it available on his phone in an app like Google Drive, so that at conferences he can remember a publication or search for one in a conversation and pull up the reference and/or associated document to show people. This is super cool and super useful - I’ll write about it if I can figure out how to pull it off on my own, or maybe I’ll get him to write about it.
One of the first things I did after passing my PhD confirmation exercise (like a qualifying exam in the USA) was to research the best way to write my thesis. As a side note, I use the word 'thesis' to refer to any large written work, including a PhD, while other English speakers might use the word 'dissertation' to refer specifically to the work that a PhD student produces. In any case, the relevant information here is the term "large", since I knew I was going to be writing a lot. I now consider the tools I'm writing about here to be essential for a productive workflow, and so this post continues the theme of an earlier post on linguistic tools.
In researching how to write my thesis, I asked friends and fellow linguists who had written grammatical descriptions. Most of them had used MS Word, and told me horror stories of lost work, documents rendered un-editable by the sheer size of their files, difficulties formatting and printing the thing, etc. So that was out of the question for me, at least at the time (2011; I think more recent versions of MS Office may have fixed some of these issues). But one of them mentioned a program called LaTeX (the funny capitalization is actually part of the name), and said that it made typesetting and organization a breeze. And it's free! Which is pretty important to students (if not everyone).
So I checked it out, and ended up spending the next few months learning how to set it up on my computer and how to use it (I use MacTeX as the backend). I am fortunate to have a little background in coding, because LaTeX is essentially a markup language. You write the text of what you want, marking up parts of it with special combinations of characters and commands (or 'tags') that tell the program how to format them. Then you run a 'compiler' that outputs everything in the correct layout as a PDF. This is pretty brilliant, because it lets you (the writer) worry mostly about the content rather than the format. But learning how to fiddle with the code is rather time-consuming, so if you're not a hardcore programmer (and I still don't really consider myself one of the hardcore types) there is quite a learning curve. Worth it, but steep.
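For those who have never seen it, here is roughly what the markup looks like - a trivial, self-contained document (the text is just filler), which the compiler turns into a formatted PDF:

    \documentclass{article}
    \begin{document}
    \section{Introduction}  % the compiler numbers and styles this heading for you
    Ordinary text, with \emph{emphasis} and a footnote.\footnote{Placed and numbered automatically.}
    \end{document}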
This is where a visual editor like LyX comes in. LyX is, pretty much out of the box, a simple way of interacting with your LaTeX code. It hides most of the code and offers formatting options, similar to MS Word or other word processors. Unlike them, however, you choose the general formatting parameters and let the backend handle the layout. You can also fiddle directly with the code if you need to, or add code to the front of the document for particular use cases, like a PhD cover page, interlinearized glossed text (IGT) examples, and more. Basically anything you need to add has probably been coded or figured out by someone, and if you're a troubleshooter like me you can run a Google search and find forums (and contribute to some yourself) that deal with your particular problem or at least something similar. And the assistance you get can be pretty phenomenal.
LyX does take a bit of configuration, and I might write another post that explains how I set it up for my use case(s). But for now, I’ll just say that using LaTeX/LyX was one of the best decisions I made as a PhD student. It really simplified my writing process and allowed me to do so much more. Rather than spending the final month formatting my thesis, I was writing and making final changes all the way up to the deadline. I probably wrote more, and re-organized the structure more, in the last month than I had in the previous three. And the text file that contains my 700+ pages of analysis, examples, and appendices is only ~6 MB. Possibly the greatest benefit was that LyX kept track of all my linked example sentences and formatted them all properly. Once I got it set up, this alone saved me days, if not weeks, of work. The learning curve was totally worth it.
In closing, if you are seriously considering using LaTeX/LyX, there are lots of good articles about it online. Here’s one, and here’s a discussion on the topic, to get you started.
My next post was intended to be about streamlining a workflow by using various LaTeX authoring tools and a bibliography manager, but last week my Mac died. It is an older computer admittedly (from 2011), but it had just had the graphics/motherboard replaced a year and a half ago due to a recall, and I had high hopes that it would last at least until the end of the year. Unfortunately that didn’t happen - while I was at a conference over the weekend it gave up the ghost, leaving me with just a USB drive that held my presentation, and the last-minute changes to a Praat script that I had managed to upload to GitHub the night before.
Fortunately I have been saving up towards a new computer, since I knew this would probably happen. It’s had a good run, but when I got back to Singapore it turned out that replacing the mother/logic board wasn’t worth it - I’d be better off putting the money I would spend fixing it towards a new computer.
What I find remarkable, however, is that I literally lost nothing except some emails that are stored on the email server anyhow, and maybe some intermediate changes to the Praat script. This is partly because I saw that the computer was starting to fail and backed up all my files with Time Machine on an external hard drive, but partly also because my workflow lends itself to backing up.
As per my previous post regarding Git, it is important to have a workflow that encourages you to snapshot changes. This is what the Git “commit” command does - fail to integrate it into your Git workflow at your peril. Another way to ensure that your files are backed up is to use online cloud storage like Google Drive and Dropbox. If you’re an academic you probably already use both of these services, but maybe your backup system is a bit ad hoc. In my case, the death of my computer has suggested to me that I need to use multiple cloud storage services.
So here’s what I am currently doing using a temporary user account on my wife’s computer:
The main reason this works for me is that I generally don’t keep Dropbox and Google Drive running - I only start these services manually when I want to sync with the online server. From some of my online research, it seems that there MAY be issues with running a Git repo or editing files while cloud storage is syncing, so I get around this potential issue by simply doing my offline edits and then syncing cloud storage manually.
So as part of my workflow:
As I go, I am also discovering more files that need to be in a cloud server to avoid data loss. This is mitigated to some extent by regular Time Machine backups, but I’m beginning to wonder if I also need to invest in expanded cloud storage. The only other drawback of this process is that since I’m backing up in multiple cloud services, the storage takes up twice as much space on my hard drive as it otherwise would. I’m still thinking about how to deal with that issue…
Analyze tone and plot: A Praat script for exploring, analyzing, and visualizing or plotting F0 (pitch) tracks
Along the lines of some of my previous posts on tools for linguistic analysis, I thought I'd post a quick update on a Praat script I've been working on. While not part of the series of posts describing my tools and workflow, this script is a recent tool I've developed for exploring and analyzing tonal patterns.
As I noted in my post on Praat scripting, the Praat program gives linguists a relatively easy way to investigate the properties of speech sounds, and scripts help to automate relatively mundane tasks. But when it comes to tone, although Praat can easily identify and measure it, there are no comprehensive scripts that help a linguist investigate tones instrumentally in a systematic way. Let me first explain a bit.
What is 'Tone'?
In this post, I use the word 'tone' to refer specifically to pitch that is used in a language at the word or morpheme level to indicate a difference in meaning from another word (or words) that are otherwise segmentally identical. In linguistics, this is a "suprasegmental" feature, which essentially means that it occurs in parallel with the phonetic segments of speech created by the movement of the articulators (tongue, mouth, etc.) of the human vocal apparatus.
Tone has a lot in common with pitch as used in music, which is why tones often seem to 'disappear' or become relatively unimportant in songs sung in languages that otherwise have tone (like Mandarin). Another important thing to remember is that there are quite a few different kinds of tone languages. The major tone types are 'register' and 'contour' tones - languages will often be categorized as having one or the other, but some have both types. African languages tend to have register tone: pitch height (measured as frequency, in Hz) is the primary acoustic correlate of tone in these languages. Asian languages tend to have contour tone: across the tone-bearing unit (TBU) the pitch may rise or fall.
There are also languages for which the primary acoustic correlate of tone (or of some tones) is not pitch but rather voice quality: glottalization (e.g. Burmese), jitter, shimmer, creakiness, or length. Languages for which pitch is not the overall primary acoustic correlate of tone are often called 'register' languages. Register languages are found throughout South-East Asia.
The pitch correlates of tones are relatively easy for Praat to identify, since their instrumentally observable correspondent is the fundamental frequency (F0) of a sound. Praat can extract the fundamental frequency as well as other frequencies (formants) that are important for various vowels. Tone researchers often use a script to extract F0, and then apply various normalization techniques to the extracted data, plotting the results in a spreadsheet program like Excel or with a more robust statistical tool like R (see this paper or this post for more details; James Stanford has also done some good work on this subject - see his presentation on Socio-tonetics here).
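For reference, the extraction step itself takes only a few lines of Praat script - something like the sketch below, where the filename and the pitch range (75-500 Hz) are placeholders you would adjust for your speaker and recording:

    # Read a recording, compute a pitch track, and print time/F0 pairs.
    sound = Read from file: "example.wav"
    pitch = To Pitch: 0.0, 75, 500
    frames = Get number of frames
    for i from 1 to frames
        time = Get time from frame number: i
        f0 = Get value in frame: i, "Hertz"
        appendInfoLine: time, tab$, f0
    endfor

The interesting (and harder) part is what happens to those numbers afterwards: the normalization and the plotting.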
One issue with this approach, however, is that often pitch tracks for each tone are time-normalized along with every other tone. So for example, in image #1 below of Cantonese tones (from this paper), all pitches are plotted across the same duration. This is fine if pitch is the primary acoustic correlate of all tones, but what if duration/length is an important correlate as well? How does a researcher begin to discover whether that is the case?
Script for analyzing tone
The problem I tried to solve was basically this: I was investigating a language (Biate) for which no research had been done on tone, and I needed a way to quickly visualize the tonal (F0) properties of a large number of items. I also wanted to be able to easily re-adjust the visualization parameters so that I could output a permanent image. Since I couldn't find an existing tool that could do this, and since I wanted it all to be done in a single program (rather than switching between two or several), I created the Praat script that you can find here at my GitHub page under the folder 'Analyze Tone'. If you look closely, you'll notice that this script incorporates several parts (or large chunks) of other people's work, and I'm indebted to their pioneering spirit.
There is a good bit of documentation at the page itself, so I'll leave you to get the script and explore, and will give only a brief explanation and example below. I have so far only tested this script on a Mac, so if you have a PC and would like to test it, please do so and send me feedback. Or, if you're a GitHub user, feel free to fork the repo and submit corrections.
The basic function of the script is to automate F0 extraction, normalization, and visualization using Praat, in combination with annotated TextGrid/audio file pairs. Segments that are labeled the same in TextGrids belonging to the same audio file are treated the same. You can start out by plotting all tracks, in which case all tones labeled the same will be drawn in the same color. Or you can create normalized pitch traces, in which case a single F0 track will be plotted for all tones with the same label. The script creates audio files on your hard drive as one of its steps, so make sure you have enough free space (each file is usually less than 50 KB, but if you have hundreds, they can add up).
To start, simply open an audio file in Praat and annotate it with the help of a TextGrid (if you don't know how, try following this video tutorial). Create a label tier that contains only tone category labels, and add labels to boundaries that surround each tone-bearing unit. Then save the TextGrid file in the same directory as the audio file, and run the script.
The kind of output you can get with this script is shown below. The first picture shows all the tones plotted for a single speaker, and the normalized F0 tracks for each tone are shown in the second picture. Importantly, F0 tracks are only length-normalized for each tone. You will also notice that the second image zooms in closer so that the relative differences between tones can be seen more easily.
Reasons for visualization
These kinds of visualizations are important for several reasons.
If you have labeled tones from multiple speakers, it is also relatively easy to see whether differences between tones are actually salient. In image #2 above, for example, we can see that most of the red tones (Tone 1) cluster with a higher (rising) contour, whereas the blue tones (Tone 2) cluster with a lower (falling) contour. A few blue tones have a rising F0 track, which may be outliers. In the lower image #3, we see that normalizing across 100 or so instances of each tone shows little effect from those rising blue tones, which strongly suggests they were outliers.
We can also see in this normalized plot that there is a clear difference between Tone 1 (red) and Tone 2 (blue), but the marginal Tone 3 (green) was questionable for most speakers that I recorded, and in the plot it is virtually identical to Tone 2. If this is true for the majority of speakers, we can conclude that Tone 3 probably doesn't actually exist in Biate as a tone correlated with pitch.
Of course all of these visualizations need qualification, and further investigation is needed by any researcher who uses this tool - your methods of data collection and analysis are crucial to ensuring that your investigation leads to the correct analysis. But speakers of a tone language will often clearly identify categories, and what you need is a tool to make those categories visible to people who may not perceive them.
I'm hoping in future to expand the coverage of this tool in order to assist with visualizing other acoustic correlates of tone, such as glottalization and creakiness. But I'm not sure when I'll get around to it. If you are a Praat script writer and would like to give it a shot, please join the collaboration and make adjustments - I'd be happy to incorporate them. For now, though, enjoy plotting pitch tracks, and let me know if you have any trouble!
Recently, I started to explore using Git to maintain and organize my data. This includes both the primary data I work from as a linguist and the various kinds of data I produce in the form of written material (books, papers, etc.). As a field linguist, I primarily work with text that is transcribed from audio and video recordings. Early on I developed an archiving system to preserve the original files in their raw form as much as possible, but I didn't develop a similar system for maintaining my working files, and I ended up with lots of duplicates of files that I had copied to various places for various purposes.
In this fifth post on linguistic tools (the overview of which is here), I plan to describe the way I am trying to overcome this issue and become more organized by adopting a version control system. Mostly, though, I'll just try to explain what Git is and what I use it for. This is by no means a complete tutorial, since other people have done that better, but hopefully it will provide some direction for others who are interested in streamlining and organizing their workflow.
When I started learning to program in Python and was introduced to Git, I saw the benefits for coding, but began to wonder whether it was possible (and useful) to use it for other organizational tasks related to other kinds of writing and data structures. There are a few blog posts about using Git for paper writing here and here and here, but they are written primarily for coders who also write papers (I'm looking at you, academic comp-sci folks!) and don't translate super well to my use case.
What is Git?
First, what is Git? Git is a version control system, which means it tracks changes in a repository (or 'repo'), basically serving as a series of backups for your work. It also has tools that allow you to make a copy (or 'branch') and work on it separately; then, when you're ready, you can see the changes made to individual files and 'merge' them together into a new version. For a simple tutorial, see here, and here is a great visual walkthrough.
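In day-to-day use this all happens through a handful of commands. A bare-bones sketch (the filename and branch name are placeholders) looks like this:

    git init                       # turn the current folder into an empty repository
    git add draft.txt              # stage a file
    git commit -m "First draft"    # take a snapshot, with a message describing it
    git checkout -b revision       # create and switch to a separate branch to experiment on
    git merge revision             # (run from the original branch) fold those changes back in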
Most explanations of Git use the example of a tree or a river, which does help in understanding the process, but for total newbies (and perhaps because a good metaphor is hard to find) I came up with the following one, which I find a bit more useful.
The Git Metaphor
Imagine you have a drawing project. You have an empty piece of paper - this is your initial blank Git repository. Now you draw a picture on the piece of paper and decide it's an OK version - this snapshot is your first 'commit'. You then decide you want to make some adjustments to the drawing, but you want to preserve the initial drawing, so you take out another piece of paper, lay it on top of the first, trace out the shape in pencil, and then begin to modify the edges of the drawing or the background by erasing or adding new bits. When you're happy with that you 'commit' it, and every time you modify it after that, you repeat the process of first tracing it out on a new piece of paper. These pieces of paper stack, so if you want to go back to an earlier drawing, you can, without losing any of the other drawings.
Now let's say your friend wants to work on the same drawing project. You can give him your latest drawing, and he can copy it and work on it in the same way. Then when he's done he can give it back to you, and you can choose which bits of his version you want to incorporate into your current version. This is super useful for projects with lots of collaborators, especially if you're writing code, since code files are basically text and relatively easy to merge based on differences between lines.
[Note that this isn't exactly what the Git system does - it doesn't actually create a NEW version of the repository, like the tracing paper, but instead uses the most recently committed version as the new base/foundation for future modifications.]
The most important thing I've had to keep in mind is to write myself detailed commit messages, and to figure out a good protocol or format for these messages. This is because once you've committed something, you can go back and search through your commits (save states), but this is only easy IF YOU KNOW WHAT THE COMMIT CONTAINS! Using the full filename of the committed file, for example, is much better than writing 'New version of Paper'. Giving more information, such as what you changed in the file, is even better.
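So instead of 'New version of Paper', something along these lines (an invented example, just to show the kind of detail I mean) is far easier to search for later:

    wordorder_paper_draft.tex: rewrote section 3, added two new examples, updated references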
My Use Case
My use case, however (as opposed to collaborating on code with a large team), is a single repository of data in which I will occasionally share a single paper with collaborators. What I want is a bit more like a backup system (think Apple's Time Machine or the versioning system in Mac OS) but with a lot more control. Git, for example, works on the command line and requires you to provide a message for each commit you make, which also requires you, the user, to be clear about what the changes are and why they were made. This (ideally) means less clutter and assists with organization.
In thinking about my needs and doing research, I discovered a drawback of version control systems like Git. If I have a single repository but am working on multiple papers at once (as I often do), guess what happens when I save changes to document A and then check out a previous version of paper B? Yup, everything reverts to the state it was in before, and I 'lose' all the changes I just made to A. I put 'lose' in quotes here because I don't actually lose anything, as long as I committed it properly, but I'd then have to check out the commit, copy the file, and then revert... anyway, it's annoying.
What I need
So what I REALLY need is a single Git repository with the ability to track individual folders/files separately with their own version histories (like embedding repositories). Git can do this in several ways, none of which are particularly intuitive: submodules, subtrees, subrepo, and simple embedding. The first two are designed for somewhat different use cases than mine, namely using (part of) someone else's code in your own project. They're not really meant to create sub-repositories in your local repo to track separate histories. The third, subrepo, looks quite promising but requires that you run everything from the root of your main repo, and since I have lots of organizational subfolders, typing the names every time just gets tedious. So, I use the strategy of simple embedding.
By simple embedding, I refer to navigating to a subfolder within my main repo that I want to track and running the command 'git init'. This does two things:
This is probably a hack, and not the way Git was intended to be used, but it works for me. The only problem I foresee happening is making too many Git repositories, and not being able to keep track of the multiplicity. So I plan to use this sparingly. For the most part, I'm ok with committing changes across the whole repository, since one of the real benefits of using a VCS is the freedom to throw things away.
Some other great posts on version control systems include this one on the benefits of version control, this one on several different version control systems for writers, and this post on the limitations of GitHub for writers - the last of a 6-part series worth reading if you are debating whether to start on the Git/version control journey. And finally, this one is about one person's suggestions for academic writing and Git.
In any case, Git or another VCS is definitely worth checking out. I also recently discovered that Git repos are supported 'out of the box' by LyX, my go-to LaTeX editor! More on that to come... Another recent discovery is SRC (Simple Revision Control), which focuses on tracking revisions to single files. This might actually be what I need. I'm still exploring it, and it might not be necessary since my LaTeX editor supports Git. We'll see.
This is the fourth installment in a series of blog posts where I discuss some of the tools I use for linguistic analysis. In a previous post I described the tools I use for processing and converting video and audio. This is partly important for using the audio with Praat for acoustic analysis, but also because ultimately I want to transcribe the recordings and analyze the grammatical structure (using the transcriptions) while remaining linked to the recordings. Ideally, such a link would allow me to play back the recording from within the program that I use to analyze the grammatical structure of the language, so I'm not constantly having to open the sound files and scrub to the correct part of the WAV.
Fortunately, this IS possible, and it currently involves using two larger programs, Transcriber and Toolbox, along with a small tool/script that I wrote for conversion (you can read more details about the script HERE). The original workflow, which my supervisor Alec Coupe taught me, depended on another script developed by Andrew Margetts, which was very useful but required internet access - potentially problematic if one was on fieldwork. Then a few years ago the site went down and conversion became a multi-step process involving emails. So I wrote my own script. But I'm getting ahead of myself.
The first step in this process is getting your audio file transcribed. It is possible to do this in Transcriber (hence the name), and the newly repackaged version for Mac works pretty well (as does 1.5.1 on Windows or in a Windows VirtualBox). You can also use another program to play back the audio (e.g. Audacity, which can slow down the audio for you, let you boost the signal, etc.) while transcribing in a text editor. Then you copy/paste the text into Transcriber for time-alignment.
I use Transcriber for time-alignment because it's simple, it allows for extremely short time-stamped windows, and it produces (basically) text files as output. There are other tools, such as SayMore (Windows only), that also allow you to transcribe and directly output files for use with Toolbox, but when I used it I found SayMore to be a bit of a memory hog, and it wouldn't let me create time-stamped sections shorter than half a second, which could be problematic for short interjections. It also kept freezing, so it took a really long time to transcribe even a short example.
So with Transcriber I simply open the audio file that I want to time-align, copy in the block of text that I've transcribed, and then listen to the audio file, follow along in the text, and insert timestamps ('enter' or 'return' key) at the correct points. Once this is done for the whole file (time-consuming, but not as time-consuming as the transcription) I save the file (in '.trs' format).
Once you have the 'TRS' file, you get to use this converter, which you download to your computer. The converter runs as either a Python script or a Windows executable file that you run in the folder that has your 'TRS' file. When you run it for the first time it asks you for the field names in your Toolbox settings, and creates a configuration file in the folder to store the settings. If it doesn't find the settings file the next time, it will ask again. When it has the settings, it outputs a 'TXT' file for each 'TRS' file in the folder.
Once you've got the 'TXT' file, you can open it up to view it. If you already work with Toolbox, you can see that it is already (mostly) in the correct format. If you don't work with Toolbox, try installing it and go through the tutorials with a new project. Then open the Toolbox-created text file and compare it to the one created by my script from the Transcriber file. You should be able to copy/paste the content of the newly-created text file into your Toolbox project text file with no issues. Now when you open your Toolbox project file in Toolbox, you should be able to see the timestamps and play the sound file (and portions of it) from within Toolbox, provided you put the audio file in the correct location.
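To give a rough idea, a single converted record might look something like the hypothetical example below; the exact backslash-coded field names depend on what you entered when the script asked for your Toolbox settings (the ELAN* markers are the script's defaults, mentioned at the end of this post):

    \ref example_001
    \ELANParticipant SPKR1
    \ELANBegin 0.000
    \ELANEnd 2.350
    \tx the transcribed text of the first time-aligned segment goes here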
This may seem a bit cumbersome, raising the question "Why use Toolbox at all?"
A final note (ELAN)
As I end this post, I should note that many linguists use ELAN for transcription and annotation of video/audio. In fact, the default field names suggested by the 'trs2txt' script - ELANBegin and ELANEnd for start and end codes, and ELANParticipant for participants - should facilitate importing into ELAN. Unfortunately, I haven't found ELAN to otherwise be particularly useful for my purposes, primarily because it doesn't have the dictionary-linking functions that Toolbox has, at least the last time I looked at it. It does have a Toolbox import function, though, and I have seen people import a Toolbox file, re-link their video, and output an HTML web page with embedded video for display on a website (hence the code labels in the script). Maybe you could also transcribe in ELAN, export to Toolbox, do your analysis, and import back to ELAN. For now, though, I think I'll stick with the Transcriber-Toolbox workflow.
In this third post about linguistic tools, I'll be discussing software that I use for acoustic analysis. Praat is one of the premier acoustic analysis tools available. While there are probably commercial software products out there that are more powerful and have more bells and whistles, Praat offers some of the best ways to visualize and manipulate sound while being free and cross-platform. And while it's not completely intuitive, it makes it quite easy to explore the sound space of a recording, especially recorded speech. I ran a workshop on the basics of how to use it, with online materials that you can practice with if you want to learn more, and there are other great tutorials online that you should search for.
One of the best features of Praat is the ability to segment sounds using TextGrids, which are basically text files that identify sections of a sound file using timestamps. The benefit of this is that once you have properly annotated a sound file you can use scripts to automate analyses, which saves a lot of time that would otherwise be spent taking individual measurements. When I first started my PhD I spent a good amount of time learning to write Praat scripts, which turned out to be a continuation of the programming I learned when I was younger (Basic, QBasic) and a worthy introduction to programming languages like Python.
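As a small illustration of the kind of automation this allows, the sketch below (with a placeholder filename) walks through a TextGrid and prints the label and duration of every labeled interval on the first tier - the basic loop that most measurement scripts are built around:

    # Print the label and duration of every labeled interval on tier 1.
    textgrid = Read from file: "word.TextGrid"
    intervals = Get number of intervals: 1
    for i from 1 to intervals
        label$ = Get label of interval: 1, i
        if label$ <> ""
            start = Get starting point: 1, i
            end = Get end point: 1, i
            appendInfoLine: label$, tab$, end - start
        endif
    endfor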
Since this has turned out to be a post that discusses Praat scripting, I'm going to introduce/attach some of the scripts I wrote/use for acoustic analysis, and link to some of the many other places you can find scripts for your particular use case. In my case these scripts are mainly in service of documentation and description of endangered and unwritten languages, but maybe others will find them useful as well.
Automatically measuring sounds:
This script ("dur_f0_f1_f2_f3_intensity.praat") is one that I modified (originally from this script but more recently I based it on this script) to give automatic measurements of segmented sounds in a TextGrid. It is an updated version of the “msr&check…" file that I made available along with the workshop I linked to above. At the time, I had recorded several wordlists in Pnar, and I spent countless hours segmenting the sounds in each word. My thinking was that even if my segmentation wasn't precise, the sheer number of sounds and their tabulation would allow me to run valid quantitative analyses. As it worked out, this was mostly the case, and I was able to target the outliers for closer examination. I also got better at recognizing Pnar sounds from all the time I spent with the words. I have now updated this script to work nicely with the following script, which plots vowels for you in the Praat picture window, which can produce print-publication-friendly images.
Vowel plot for formants:
Another script, which I wrote/modified from other bits and pieces, takes a CSV spreadsheet with formant values and plots them (in the standard vowel chart format) as a Praat drawing, with an oval marking their standard deviation (“draw_formants_plot_std_dev.praat”). I wrote this primarily to produce a clearer image than the one produced by JPlotFormants for my PhD thesis. Thanks also to the Praat User Group for their help with getting the script right.
I recently modified this script to work nicely with the automatic measurement script above. What this means is that you can segment all your words using TextGrids, run the script above to produce a CSV, and then just run this script to plot the vowels from that CSV. I implemented a 'Sequential' option for the plot so you can plot one vowel at a time, which means that you can leave all the segmented consonants (and VOT annotations) in the CSV file for later analysis. Or you can remove them - up to you. Just keep in mind that if you do have consonants in the CSV, it WILL try to plot them on the chart unless you choose the Sequential option.
The third script linked here (“tone_analysis.praat”) I recently wrote in order to take continuous measurements of tones without normalization. This is more for exploration of tonal systems on a per-speaker basis, allowing the investigator to identify whether length is potentially a factor in the characteristics of a particular tone. I am planning to modify it to allow for percentage-based analysis (and thus normalization) of tones, which could be used by the investigator to create clearer plots once they identify the characteristics of the individual tones. But I haven’t gotten around to it yet. I'll write another blog post when I do.
As a final note, these scripts are really just the tip of the iceberg when it comes to the kind of analysis you can do in Praat. For more on Praat scripting, check out this great tutorial, Will Styler's excellent blog, the scripts he uses/maintains, these resources at UW and these from UCLA. You can also follow along with Bartlomiej Plichta as he leads you through some scripting lessons in his videos, which are very useful.