Stage 3 - Content Analysis

MAXQDA describes itself with the tagline "The Art of Data Analysis." These words capture the heart of the Four Stages of Research. Qualitative research is not just running a series of statistical tests and procedures. It can involve statistics, but it is so much more. It is reading, coding, taking notes (memos), looking for patterns and themes, and then engaging in an analysis of cases.

Stage 3 begins with a first read-through of a sample of documents, which serves to refine the codebook and finalize the list of document variables. The initial goal is to make sure that the code system and variables capture the elements necessary to answer the research questions. In addition to reading and coding, you can use MAXQDA's auto-code feature to find some of the key themes.

For example, when I read the first few documents in the dog sniff study, it became clear that some courts took the testimony of expert witnesses very seriously, so I used the auto-code feature to search for 'expert witness', coding each occurrence of that phrase along with the paragraph that follows it. I repeated this for several other concepts I knew I needed to look for, such as field performance and handler cueing of dog behavior.
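The logic of that auto-code pass can be sketched outside MAXQDA. The following Python snippet is only a conceptual illustration, not MAXQDA's implementation; the document text and code name are hypothetical:

```python
import re

# Hypothetical opinion text, with paragraphs separated by blank lines.
document = (
    "The trial court heard from an expert witness on dog training.\n\n"
    "That testimony addressed the reliability of field records.\n\n"
    "The motion to suppress was denied."
)

def auto_code(text, phrase, code_name):
    """Tag every paragraph containing the phrase, plus the paragraph after it."""
    paragraphs = text.split("\n\n")
    coded = set()
    for i, para in enumerate(paragraphs):
        if re.search(re.escape(phrase), para, re.IGNORECASE):
            coded.add(i)            # the paragraph with the hit
            if i + 1 < len(paragraphs):
                coded.add(i + 1)    # and the one that follows it
    return [(code_name, i, paragraphs[i]) for i in sorted(coded)]

segments = auto_code(document, "expert witness", "Expert Witness")
# The first two paragraphs are coded; the third is untouched.
```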

My general approach is to read through all the cases once, coding as I go, and to create either memos tied to documents or free memos to take notes as I progress through the process.

The Deep Dive into the Data: Analysis

Once I have refined the codebook and coded every document (sometimes going back), I am ready for the deep dive into the data. I use the analytical tools built into MAXQDA to identify patterns and themes and to begin to make sense of the overall dataset. I begin with code frequencies to determine the overall breakdown of codes in my data: How many cases involved particular concepts? The Code Frequency page explains that process.
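Conceptually, the code-frequency step counts two things for each code: the raw number of coded segments, and the number of distinct cases in which the code appears at least once. A minimal sketch of that distinction, with made-up case names and codes:

```python
from collections import Counter

# Hypothetical coded segments as (document, code) pairs.
coded_segments = [
    ("Case_01", "Field Performance"), ("Case_01", "Handler Cueing"),
    ("Case_01", "Field Performance"), ("Case_02", "Expert Witness"),
    ("Case_02", "Field Performance"), ("Case_03", "Handler Cueing"),
]

# Raw number of coded segments per code.
segment_counts = Counter(code for _, code in coded_segments)

# Number of distinct cases in which each code appears at least once.
case_counts = Counter(code for _, code in set(coded_segments))

# "Field Performance" occurs in 3 segments but only 2 cases.
```

The distinction matters for questions like "how many cases involved particular concepts?", where the per-case count is the relevant figure.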

My next step is to refine that further. The MAXQDA Stats package can run both frequencies and descriptive statistics. For many of my projects, where the data is largely nominal, descriptives are less valuable, but the tools are there.

I also go back to the codebook and identify the core concepts I set out to examine. Once coding is done, I select individual codes and transform them into variables.
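Transforming a code into a variable amounts to giving every document a value derived from its coded segments, for example a count, with 0 meaning the code is absent. A rough sketch of that logic (the case names and codes are hypothetical):

```python
# Hypothetical mapping: document -> list of codes applied to its segments.
coded = {
    "Case_01": ["Field Performance", "Field Performance", "Handler Cueing"],
    "Case_02": ["Expert Witness"],
    "Case_03": [],
}

codes = ["Field Performance", "Handler Cueing", "Expert Witness"]

# One row per document; each code becomes a variable holding the number
# of segments carrying that code (0 = code absent from the document).
variables = {
    doc: {code: applied.count(code) for code in codes}
    for doc, applied in coded.items()
}

# Case_01 has two "Field Performance" segments; Case_03 has none at all.
```

Once the codes are in this per-document, per-variable form, they can be cross-tabulated or exported for statistical analysis.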

I then use MAXQDA's analytical tools to look first at descriptive statistics of what I coded. This is often an emergent process, and it certainly differs from project to project, but I try to examine the data in multiple ways, look for unexpected patterns, and then take advantage of the more advanced tools.

For example, in this project, one of the issues I was exploring was the role of a dog’s “field performance” as opposed to its formal training. I wanted to understand how lower courts treated a dog’s field performance records (e.g., how frequently the dog alerted, but no contraband was found) in light of the policy established in Harris.

Numerous Options

The way a researcher uses MAXQDA is inherently a product of individual choices. The software includes numerous tools, and it isn't as simple as saying: do A, B, C, and D in this order. Much will depend on the nature of the project. Some of that will also be tied to how the codebook was created. For example, when I built a codebook to explore the judicial impact of the Florida v. Harris dog sniff case, I organized codes into categories that I believed would be important. The literature review on dog sniffs suggested that there were several key issues that repeat in the literature, relating to a dog's reliability. I also knew that American courts rely on several legal theories in deciding most Fourth Amendment cases. Finally, I was interested in the nature of any criticisms of the Harris decision and dog sniffs in general that judges would make in their opinions. These provided three primary thematic approaches to organize the research: legal theories, theories of dog sniffs, and judicial criticisms. Additional information was coded about the location and nature of the stop that resulted in the dog sniff, and characteristics of the defendant.
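The three thematic categories described above naturally form a two-level code system. A sketch of that structure, with illustrative subcodes rather than the project's actual codebook:

```python
# Illustrative two-level codebook; the subcode names are hypothetical.
codebook = {
    "Legal Theories": [
        "Totality of the Circumstances",
        "Probable Cause",
    ],
    "Theories of Dog Sniffs": [
        "Field Performance",
        "Formal Training",
        "Handler Cueing",
    ],
    "Judicial Criticisms": [
        "Criticism of Harris",
        "Criticism of Dog Sniffs",
    ],
}

# Flat list of every subcode, useful when iterating over the system.
all_codes = [code for group in codebook.values() for code in group]
```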

Examining the data from a content analysis is easier when you compartmentalize it into smaller pieces. Many of the examples in these tutorials do exactly that, considering only specific groupings of codes and documents. That makes the process more manageable than trying to examine 58 documents, or 70 codes, at once.

What follows is a graphical way of thinking about the process and tools. It is not intended to be linear. You do not have to follow steps 1, 2, 3, 4 in order and can jump from tool to tool, but the model provides you with a way to think about the emergent properties of this type of research.

Further refining the process - the emergent and non-linear ways to examine data

Individual tutorials