It’s your first ever HIV conference, there are a thousand PowerPoint slides, and it feels like nothing makes sense. Don’t panic — we’ve all been there.
Epidemiology literally means ‘the study of epidemics’ — in practice, the study of how diseases occur and spread in populations. Many people working in the HIV community sector learned the basics of HIV epidemiology on the job. (There’s always the option to do further training in public health later in your career.) It’s normal to feel confused, and it’s okay to ask your colleagues, and the researchers themselves, to explain.
This lesson covers the different kinds of research that contribute to Australia’s HIV response. It is not an introduction to epidemiology! When you have the pieces of the puzzle you can start putting the bigger picture together.
We use a lot of line charts to visualise how the epidemic is changing over time. The number of new cases each year can go up or down, or remain stable, and these changes can give us pointers on how we need to target future work – both in prevention and in care and support.
You are most likely to see versions of three graphs. The first is the historical picture of HIV in Australia.
In Australia, the largest number of HIV cases has been seen in gay and bisexual men, and other men who have sex with men (who may not identify as gay or bisexual). Early in the Australian epidemic, we responded quickly to the possibility of transmission involving sex workers and people who use drugs (PWUD), and there have been relatively few cases in these groups. They continue to be identified as priority groups for our national strategy, emphasising the continuing need for well-funded, peer and community-led organisations that work with communities affected by the epidemic.
As we can see in the graph above, the number of new HIV cases among gay, bisexual and other men who have sex with men (GBM) has been declining since 2015. This reflects the emergence and uptake of new HIV prevention approaches, including treatment-as-prevention (TasP, also known as U=U) and pre-exposure prophylaxis (PrEP). It also reflects a significant effort to get more people tested for HIV and sexually transmitted infections (STI) more often.
However, there have been sub-groups of GBM where these changes have happened more slowly or not at all — particularly GBM who were born overseas, and GBM who live in outer suburban and regional areas.
As we see declines in cases among GBM, heterosexual people represent a larger proportion of the total number of new cases. Overseas travel is a factor in some of these cases, particularly new cases in heterosexual men. Cases among people who visit high-prevalence countries (such as countries in sub-Saharan Africa) and people who have partners from those countries have remained relatively steady. Of course, this could change.
It can be challenging to identify trends in groups with smaller numbers of cases, as numbers can go up or down based on random factors. When this is the case, we should rely more on qualitative studies and consultation to understand what might be happening and how we can address it.
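To see why small numbers are so noisy, here is a minimal sketch using entirely made-up figures. It simulates ten years of case counts where the true underlying rate never changes, yet the yearly numbers still jump around:

```python
import math
import random

def poisson_sample(rate, rng):
    """Draw one Poisson-distributed count (Knuth's algorithm)."""
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

# Hypothetical: a constant underlying rate of 12 new cases per year.
rng = random.Random(42)
counts = [poisson_sample(12, rng) for _ in range(10)]

# The counts vary from year to year even though nothing real has changed.
print(counts)
```

With only a dozen cases a year, an apparent ‘rise’ or ‘fall’ of a few cases can easily be chance variation — which is why we lean on qualitative studies and consultation for small groups.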
Something else to keep in mind is where these data come from. The lines are tracing the number of new diagnoses each year. When a person is diagnosed, the doctor fills out a form and sends it to the health department, and this is known as a notification. New infections only come to our attention when they are diagnosed, so we never have a complete picture of this year’s HIV epidemic — there is always a lag.
Also, we analyse notification data based on the questions asked on the notification form, but there are many more questions we might want to ask. Surveillance research gives us the big picture but it has many gaps. We can use other kinds of research to build a more detailed picture.
When lines change course on graphs, we want to know why – what’s happening? A kind of research we call behavioural surveillance helps us answer that question. It uses surveys where people in key groups can self-report on their HIV risk behaviours and the protective practices they have used to prevent HIV transmission. These can be one-off or regular surveys, sometimes with the same cohort of people each time. The data can be analysed statistically to answer different questions we might have in mind.
The mainstay of behavioural surveillance in Australia is the Gay Community Periodic Survey that runs each year in major cities around the country. Each year the researchers report back to the programs that helped to recruit participants. For a typical example of a report back, watch this video:
You can find an annual report for your city here:
Each different graph in a report or journal article represents a different analysis of the data. Generally, we are looking for differences that are statistically significant. (See note.)
Numeric and statistical findings can seem very authoritative, but there are some important limitations to keep in mind when interpreting findings from behavioural surveillance. First, recruitment of survey participants doesn’t reach everyone in the groups of interest, so findings may under-represent some people’s experiences and outcomes. Second, self-reported data can be unreliable for many different reasons. The findings are still useful; we simply interpret them with these limitations in mind.
A key protective practice is the use of effective prevention methods, and the GCPS can measure changes in their use over time. In the past, all we could monitor was the self-reported use of condoms with casual and regular partners. Now we look at prevention coverage — the net (overall) proportion of people who are using at least one effective prevention method. These methods include condom use, pre-exposure prophylaxis, and treatment as prevention (undetectable = untransmittable).
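As a rough sketch of what ‘coverage’ means, assuming some made-up survey responses (the real GCPS measure is more refined than this), coverage counts anyone who reports at least one effective method:

```python
# Hypothetical survey responses: did the respondent use each method?
responses = [
    {"condoms": True,  "prep": False, "undetectable": False},
    {"condoms": False, "prep": True,  "undetectable": False},
    {"condoms": False, "prep": False, "undetectable": True},
    {"condoms": False, "prep": False, "undetectable": False},
]

# A respondent is 'covered' if they used at least one method.
covered = sum(1 for r in responses if any(r.values()))
coverage = covered / len(responses)
print(f"Prevention coverage: {coverage:.0%}")  # 3 of 4 respondents -> 75%
```

The point of a net measure is that someone who stops using condoms but starts PrEP is still covered, so coverage can stay high even as the mix of methods changes.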
Testing for HIV is another protective practice — knowing your status is essential for using PrEP or U=U to protect yourself and your partners.
The ACCESS project monitors the number of tests for HIV, viral hepatitis and other sexually transmitted infections (STIs), as well as test results, at a wide range of clinics. It obtains these data automatically, without collecting any names or addresses of the patients involved.
Here’s why that’s useful. If the number of new diagnoses goes up, we might want to know if that could reflect increased testing, or increased risk-taking in the community. Increased testing could suggest we are finding more undiagnosed cases, rather than seeing an increase in new infections. ACCESS data can give us an idea if people are getting tested more.
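One way to think this through, using entirely hypothetical figures, is the test positivity rate (diagnoses divided by tests). If diagnoses rise while positivity falls, increased testing is a plausible explanation:

```python
# Hypothetical figures only -- not real surveillance data.
years = {
    2018: {"tests": 50_000, "diagnoses": 100},
    2019: {"tests": 80_000, "diagnoses": 120},
}

for year, d in years.items():
    positivity = d["diagnoses"] / d["tests"]
    print(year, f"diagnoses={d['diagnoses']}, positivity={positivity:.2%}")

# Diagnoses went up (100 -> 120), but positivity fell (0.20% -> 0.15%):
# consistent with more testing rather than more transmission.
```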
The lines on a graph don’t tell us how and why people are changing their behaviour in ways that contribute to HIV transmission rates. We ask some of these questions in behavioural surveillance, but the questions we can ask in the GCPS are quite limited. There is always a trade-off between depth and people’s willingness to take part — surveys like FLUX are very in-depth, but this results in a questionnaire that takes a long time to complete.
When we want in-depth understandings of how and why people take risk and adopt protective practices, we turn to social research. For example, the PrEPARE study used a mix of surveys and in-depth qualitative interviews to understand how people perceive PrEP and the people who use it. Compared to surveys, interviews take longer and sample sizes (the number of participants) are smaller, but they can explore the issues and answers in much more detail.
For example, a study of heterosexual travellers from Western Australia to South-East Asia emphasised the importance of their identity in the HIV risks they might take on. There was a key shift from a ‘tourist’ identity to a more ‘seasoned’ traveller who understood how to manage themselves. This finding is supported by quotes in the journal article:
“The majority of the men in the study spoke about undergoing a transition from being a tourist to being a traveller or expatriate and about changes in their self-perception and in the way they saw their peers. Some men spoke about changes they experienced within themselves as they became more familiar and connected to their host country and often this sense of connection would build over a number of trips. Ronald, who was working in a fly in-fly out context, using Thailand as a base, provides a good example of the shift from tourist to regular traveller: ‘When you first go there it’s so different and amazing and everything … I was full on into the party side of it, cashed up and … looking for a good time basically. But when that wore off a bit, yeah I wasn’t there so much for that. It was more the people and just the place and just the whole attitude of the place is nice.’ (Ronald, 30–39 age group, Asia, heterosexual)”

Source: Brown et al. (2012)
Social research using interviews often identifies themes (commonalities or points of interest in the data). It may develop theories (concepts we can use to think about the issues involved). The goal is not to generalise about how a majority of people think and act. It relies on people who use the research to interpret the findings in light of all the other sources of knowledge available.
We use the phrase ‘lived experience’ a lot, particularly when talking about the lives of people with HIV. It signals a commitment to in-depth understanding: listening to the voices of the people who are most affected. This can be achieved through surveys like HIV Futures, through interview studies, and through consultation and community engagement.
The Futures study is a long-running project that invites people living with HIV to participate in a survey every 2-3 years. It recruits participants through organisations that provide services and representation to PLHIV, as well as direct recruitment through social media posts. The findings are often used to understand the changing experience of PLHIV – for instance, to identify new needs that could be addressed through funded projects and services. A lot of research focuses on incidence (new cases) but HIV Futures helps us understand the impact of the HIV epidemic.
As organisations and programs working with affected communities and PLHIV, we are an important source of knowledge in our own right. As the W3 (‘What Works and Why’) study has reported, peer and community-led programs can pick up issues, and begin sharing knowledge about them, some time before they are identified in surveillance and social research. This is because we learn from all our encounters with the clients and contacts of our services.
When an issue has been identified, organisations will often conduct targeted consultation with other programs in the sector and people in the community. For example, in 2010 the Double Trouble consultation forum produced a report on ‘what do we know and how do we know it’ regarding the HIV and sexual health promotion needs of culturally diverse MSM. The Double Trouble report was one of the earliest occasions when the needs of overseas-born MSM were identified as a gap in our knowledge and our response.
In 2015, quite a long time after the Double Trouble report, the Australian Surveillance Report presented the first breakdown of new HIV notifications by the country of birth of the people diagnosed.
New South Wales had done the same with its own notification data, which showed that overseas-born GBM outnumbered Australian-born GBM in new HIV notifications, even though people born overseas make up only around one-third of the Australian population. This was an important signal that overseas-born GBM have significant unmet prevention needs. (We’ll discuss what ‘need’ means in another lesson.)
HIV diagnoses among Australian-born GBM began to decline after the introduction of PrEP and TasP, but there was no comparable decline among overseas-born GBM. So there was a gap in HIV prevention outcomes as well.
In 2020, the Kirby Institute published the Gaps Project Report, a piece of targeted social research drawing on surveillance and behavioural surveillance data.
It breaks down the data in three main ways:
Those three factors overlap, so the Gaps Project Report presents many, many comparisons trying to tease out how they interact.
Open the Gaps Report and look at the four graphs on page 26. The graphs on this page compare two variables at the same time, looking at their potential impact on a third variable — the outcome.
Compare the four graphs. Can you answer this question?
Comparing variables in this way makes them easy to understand, but it takes up a lot of space. Researchers often use a technique known as regression analysis to compare the impact of multiple variables at the same time. When looking at the results of a regression analysis, we are interested in comparisons that are ‘statistically significant.’
Significant results are often marked with asterisks or a p-value (p<0.05, p<0.01, p<0.001). A result is statistically significant if a difference of that size would be unlikely to occur by chance, given the design and size of the study.
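To make ‘unlikely to occur by chance’ concrete, here is a minimal sketch with made-up numbers: a permutation test that asks how often randomly shuffling two groups together produces a difference as large as the one we observed. (Real analyses use regression models and dedicated statistics software, not hand-rolled code like this.)

```python
import random

# Hypothetical: 200 people surveyed in each of two groups;
# 120 in group A and 95 in group B report using PrEP.
group_a = [1] * 120 + [0] * 80
group_b = [1] * 95 + [0] * 105
observed = sum(group_a) / 200 - sum(group_b) / 200  # 0.600 - 0.475 = 0.125

rng = random.Random(0)
pooled = group_a + group_b
trials, n_extreme = 10_000, 0
for _ in range(trials):
    rng.shuffle(pooled)                     # break any real group difference
    a, b = pooled[:200], pooled[200:]
    diff = sum(a) / 200 - sum(b) / 200
    if abs(diff) >= abs(observed):          # chance alone did at least as well
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference = {observed:.3f}, p = {p_value:.4f}")
```

If only a small fraction of shuffles (say, fewer than 5 in 100) reproduce a difference that big, we report p<0.05 — the difference is unlikely to be chance alone.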
In this lesson, we have barely scratched the surface of the different kinds of research you might encounter at an HIV conference or seminar. But we have described the different pieces of the puzzle and how they can fit together.
We’ve shown how emerging needs and gaps in health outcomes can be identified using different sources of knowledge, including consultation, surveillance, behavioural surveillance, and social research. And we’ve noted that each kind of research has its own methods, logic, and limitations. These are important to keep in mind when we interpret and compare findings.
Lastly, we’ve highlighted an important emerging need in the changing epidemiology of the Australian HIV epidemic. In the following lessons we’ll discuss what this might mean for programs and policy responses.