Ecological Momentary Assessment (EMA)

Ecological momentary assessment (EMA) is a research approach that gathers repeated, real-time data on participants’ experiences and behaviors in their natural environments.

This method, also known as experience sampling method (ESM), ambulatory assessment, or real-time data capture, aims to minimize recall bias and capture the dynamic fluctuations in thoughts, feelings, and actions as they unfold in daily life.

EMA typically involves prompting individuals to answer brief surveys or record specific events throughout the day using electronic devices or paper diaries.

Because responses are recorded close in time to the experience itself, they depend less on memory and represent an individual’s experience more accurately.

The repeated assessments collected in experience sampling studies allow researchers to study microprocesses that unfold over time, such as the relationship between stress and mood or the factors that trigger smoking relapse.

This makes EMA a valuable tool for researchers who want to study how people behave and feel in their natural environments.

Here are some key features of ecological momentary assessment:

  • Real-time assessment: Experience sampling involves asking participants to report on their experiences as they are happening, or shortly thereafter. This is typically done using electronic devices such as smartphones, but can also be done using paper diaries.
  • Repeated assessments: Experience sampling studies typically involve asking participants to complete multiple assessments throughout the day, over a period of several days or weeks. This allows researchers to track changes in participants’ experiences over time.
  • Focus on subjective experience: Experience sampling is often used to study subjective experiences such as moods, emotions, and thoughts. However, it can also be used to study objective behaviors such as smoking, eating, or social interaction.

How Experience Sampling Works

Participants are provided with a device.

Traditionally, EMA studies relied on digital wristwatches and paper assessment forms. The wristwatches were programmed to beep at random or fixed intervals throughout the day, signaling participants to record their experiences.

Currently, smartphones are the dominant tool for both signaling and data collection in ESM studies.

Not all participants have equal access to or comfort with technology. Researchers need to consider the accessibility of mobile interfaces for participants with visual or hearing impairments, varying levels of technological literacy, and preferences for different input methods.

Consider the specific characteristics and needs of the target population when selecting devices and designing survey interfaces.

Sampling design.

EMA studies utilize specific sampling designs to determine when and how often participants are prompted to provide data.

Two primary sampling designs are commonly employed:

  • Time-based sampling: Participants receive prompts at predetermined times throughout the day. These times can be fixed intervals, such as every hour, or randomized within predefined time blocks. For example, a study might instruct participants to complete an assessment every 90 minutes between 7:30 a.m. and 10:30 p.m. for six consecutive days.
  • Event-based sampling: Participants are prompted to complete assessments whenever a specific event of interest occurs. This could include events like smoking a cigarette, having a social interaction, experiencing a specific symptom, or engaging in a particular activity.

Questionnaire items.

Participants receive prompts throughout the day. These prompts, often referred to as “beeps,” signal participants to answer a short questionnaire on their device.

The survey questions are carefully designed to capture information relevant to the research question. They often use validated scales to measure various psychological constructs, such as mood, stress, social connectedness, or symptoms.

Researchers should consider how long it takes to complete surveys, the frequency of assessments, and the overall burden on participants’ time and attention. Adjustments to the protocol (e.g., reducing survey length or frequency) might be necessary based on pilot participant feedback.

Researchers should assess whether survey items are clear, relevant, and appropriate for the context of participants’ daily lives.

The format of the questions can be open-ended, close-ended, or use scales, depending on the study’s aims. The questionnaires typically include questions about:

  • Current thoughts, feelings, and behaviors: This could include questions about mood or emotions, stress levels, urges, or social interactions.
  • Contextual factors: This may include questions about their physical location, company (who they are with), or activity at the time of the prompt.

Participants’ responses to these surveys are then aggregated and analyzed to identify patterns in their experiences over time.

Sensor data.

In addition to self-reported questionnaires, some EMA studies utilize sensors embedded in smartphones or wearable devices to collect passive data about the participant’s environment and behavior.

This could include data from GPS sensors, accelerometers, microphones, and other sensors that capture information about location, movement, social interactions, and physiological responses.

This sensor data can help researchers gain a richer understanding of the context surrounding participants’ experiences and potentially identify objective correlates of self-reported experiences.

Data management and analysis.

The richness of EMA data requires careful planning and specific analytic approaches to leverage its full potential.

EMA studies, particularly those using mobile devices, can generate large, complex datasets that require appropriate data management and analysis techniques.

Researchers need to plan for data cleaning, handling of missing data, and using statistical methods, such as multilevel modeling (also known as hierarchical linear modeling or mixed-effects modeling), to account for the hierarchical structure of EMA data.

  • Nested Structure: ESM studies yield data where repeated observations (Level 1) are nested within participants (Level 2). This means responses from the same individual are not independent, violating a core assumption of traditional statistical methods like ANOVA or simple regression.
  • Unequal Participation: Participants often contribute different numbers of data points due to variations in compliance, missed signals, or study design. This unequal participation further complicates analysis and necessitates approaches that can accommodate varying numbers of observations per participant.

Multilevel models explicitly account for this nested structure, allowing researchers to partition variance at both the within-person (Level 1) and between-person (Level 2) levels.

This enables accurate estimation of effects and avoids the misleading results that can occur when using traditional statistical methods that assume independence.

Various statistical software packages are available for multilevel modeling, including HLM, Mplus, R, and Stata.
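As a concrete illustration of what "partitioning variance at both levels" means, the sketch below computes a simple intraclass correlation (ICC) by hand from simulated nested data. The mood values are invented for illustration; a real analysis would use a dedicated multilevel modeling package rather than this manual decomposition.

```python
from statistics import mean

# Simulated EMA data: momentary mood reports (Level 1) nested within
# participants (Level 2). Values are hypothetical.
data = {
    "p1": [3, 4, 3, 5, 4],
    "p2": [7, 6, 8, 7, 7],
    "p3": [5, 5, 4, 6, 5],
}

# Between-person variance: spread of each participant's mean around the grand mean.
person_means = {p: mean(vals) for p, vals in data.items()}
grand_mean = mean(person_means.values())
between_var = mean((m - grand_mean) ** 2 for m in person_means.values())

# Within-person variance: spread of each report around that person's own mean.
within_var = mean(
    (x - person_means[p]) ** 2 for p, vals in data.items() for x in vals
)

# ICC: share of total variance due to stable between-person differences.
icc = between_var / (between_var + within_var)
print(f"between={between_var:.2f}, within={within_var:.2f}, ICC={icc:.2f}")
```

A high ICC signals that responses from the same person are strongly correlated, which is exactly the dependence that multilevel models handle and that ANOVA or simple regression wrongly assume away.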

Time-Based Sampling

Time-based sampling in Ecological Momentary Assessment (EMA) or the Experience Sampling Method (ESM) involves collecting data from participants at specific times throughout the day, as opposed to event-based sampling, which collects data when a particular event occurs.

The goal is to obtain a representative sample of a participant’s experiences over time.

There are three main types of time-based sampling schedules:

1. Fixed-interval schedules

Participants are prompted to report on their experiences at predetermined times. This could involve receiving a signal to complete a survey every hour, twice a day (e.g., morning and evening), or once a day.

Fixed-interval schedules allow researchers to study experiences that unfold predictably over time.

For instance, a study on mood changes throughout the workday might use a fixed-interval schedule to capture variations in mood at specific points during work hours.

2. Random-interval schedules

Participants are prompted to report their experiences at random intervals or based on a more complex time-based pattern.

Random-interval sampling makes prompts unpredictable, which reduces anticipatory changes in behavior and yields a more representative sample of a participant’s day.

For example, a study investigating the relationship between stress and eating habits might use a random-interval schedule to prompt participants to report their stress levels and food intake at unpredictable times throughout the day, capturing a broader range of daily experiences.
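A random-interval schedule like this is straightforward to generate programmatically. The Python sketch below draws unpredictable prompt times within a daily window while enforcing a minimum gap so prompts do not cluster; all names and default values are illustrative, not from any specific EMA platform.

```python
import random

def random_interval_schedule(start_hour=7.5, end_hour=22.5, n_prompts=8,
                             min_gap=0.5, seed=None):
    """Draw n_prompts random prompt times (decimal hours) within a daily
    window, keeping at least min_gap hours between consecutive prompts."""
    rng = random.Random(seed)
    # Rejection sampling: redraw the whole set until no two prompts
    # fall closer together than min_gap.
    while True:
        times = sorted(rng.uniform(start_hour, end_hour) for _ in range(n_prompts))
        if all(b - a >= min_gap for a, b in zip(times, times[1:])):
            return times

schedule = random_interval_schedule(seed=42)
print([f"{int(t):02d}:{int(t % 1 * 60):02d}" for t in schedule])
```

The minimum-gap constraint is a common practical refinement: fully unconstrained random draws occasionally place two prompts minutes apart, which inflates participant burden without adding information.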

3. Time-stratified sampling

This strategy offers a more structured approach to random sampling. It involves dividing the total sampling time frame into smaller, predefined time blocks or strata, and then randomly selecting assessment times within each time block.

This method ensures a more even distribution of assessments across different times of the day while still maintaining some unpredictability.

Here’s how time-stratified sampling works:

  1. Define the time blocks: The researcher first divides the total sampling window, such as a day or a specific period of the day, into smaller time blocks. For example, a study investigating mood fluctuations throughout the day might divide the day into two-hour blocks.
  2. Randomize within blocks: Within each time block, the assessment times are randomly selected. For instance, in the mood study example, the researcher might program the EMA device to prompt participants for an assessment at a random time within each two-hour block.
  3. Ensure coverage: By randomizing within blocks, researchers can ensure that each part of the day or the sampling window is represented in the data, as at least one assessment will occur within each block. This helps reduce the likelihood of missing data for specific times of the day and provides a more comprehensive view of the participant’s experiences.

For example, a researcher studying the association between stress and alcohol cravings among college students might use a time-stratified sampling approach with the following parameters:

  • Sampling window: 8:00 PM to 12:00 AM (4 hours) for seven consecutive days.
  • Time blocks: Two-hour blocks (8:00 PM – 10:00 PM and 10:00 PM – 12:00 AM).
  • Randomization: Participants are prompted twice daily, once at a random time within each two-hour block.
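The stratified design above can be expressed in a few lines of code. This Python sketch (function and argument names are illustrative) draws one random prompt time within each predefined block for each study day:

```python
import random

def time_stratified_schedule(blocks, days=7, seed=None):
    """For each day, draw one random prompt time within each predefined
    block, where a block is (start, end) in 24-hour decimal hours."""
    rng = random.Random(seed)
    return {
        day: [round(rng.uniform(start, end), 2) for start, end in blocks]
        for day in range(1, days + 1)
    }

# Blocks from the example: 8:00-10:00 PM and 10:00 PM-midnight.
blocks = [(20.0, 22.0), (22.0, 24.0)]
schedule = time_stratified_schedule(blocks, days=7, seed=1)
print(schedule[1])  # day 1: one random time in each two-hour block
```

Because every block contributes exactly one assessment, coverage of the sampling window is guaranteed while the exact prompt times remain unpredictable to the participant.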

Considerations for Time-Based Sampling:

  • Frequency and timing of assessments: The frequency and timing of assessment prompts should be carefully considered based on the research question and the nature of the phenomenon being studied. For example, studying highly variable states like anxiety might require more frequent assessments compared to studying more stable states. Studies have used assessment frequencies ranging from every 30 minutes to daily assessments, with the choice dependent on the research question and participant burden.
  • Participant burden: Frequent assessments, especially at inconvenient times, can lead to participant burden and potentially affect compliance. Researchers should carefully balance the need for frequent data collection with the potential impact on participants’ daily lives.
  • Reactivity: Participants might adjust their behavior or experiences in anticipation of the prompts, especially with fixed-interval schedules. This reactivity can be mitigated to some extent by using random-interval schedules.
  • Data analysis: Time-based sampling designs require appropriate statistical methods for analyzing data collected at multiple time points, with multilevel modeling being a commonly used approach. The choice of statistical analysis should account for the nested structure of the data (i.e., multiple assessments within participants).

Event-Based Sampling

Event-based sampling, also known as event-contingent sampling, requires participants to complete an assessment each time a predefined event occurs.

This event could be an external event (e.g., a social interaction) or an internal event (e.g., a sudden surge of anxiety).

For example, participants might be instructed to record details about every cigarette they smoke, including the time, location, their mood, and the social context.

Event-based protocols offer a valuable tool for researchers interested in gaining a deeper understanding of how specific events are experienced and the factors that influence them.
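In software, an event-contingent assessment is often just a timestamped record created when the participant initiates a report. A minimal Python sketch, with hypothetical field names based on the smoking example above:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SmokingEvent:
    """One participant-initiated, event-contingent report (illustrative fields)."""
    timestamp: datetime
    location: str
    mood: int           # e.g., 1 (very negative) to 7 (very positive)
    social_context: str

log = []

def record_event(**fields):
    # The timestamp is captured automatically at initiation; the
    # remaining fields come from the participant's responses.
    event = SmokingEvent(timestamp=datetime.now(), **fields)
    log.append(event)
    return event

record_event(location="balcony", mood=3, social_context="alone")
print(len(log))
```

Capturing the timestamp automatically, rather than asking participants to enter it, is one simple way to reduce burden and improve the accuracy of event timing.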

Research Questions

Event-based sampling designs are particularly well-suited for studying specific events or behaviors in people’s daily lives.

Questions focusing on the frequency and nature of events:

  1. How often do specific events occur in daily life? This type of question seeks to understand the prevalence of certain experiences, especially those that might be underreported or difficult to recall accurately through retrospective methods. For instance, an event-based EMA design could be used to track the frequency of:
    • Social interactions exceeding a certain duration,
    • Conflicts or disagreements with colleagues or family members,
    • Instances of craving or substance use,
    • Panic attacks or other anxiety-provoking situations,
    • Headaches or other pain episodes.
  2. What are the characteristics of these events? Beyond mere frequency, event-based EMA allows researchers to explore the qualitative aspects of events, providing a richer understanding of their nature and impact. For example:
    • What emotions are experienced during and after a social interaction?
    • What are the typical antecedents and consequences of a conflict?
    • What coping strategies are employed during a panic attack?

Questions exploring relationships between events and other variables:

  • How do events relate to momentary experiences? Event-based EMA can illuminate how specific events influence thoughts, feelings, and behaviors in real-time. For instance:
    • Does engaging in a challenging work task lead to increased stress or fatigue?
    • Does receiving social support during a stressful event buffer against negative emotions?
    • Does engaging in a pleasant activity, like listening to music, improve mood?
  • How do events predict subsequent well-being? This line of inquiry examines the longer-term impact of events on overall well-being or functioning. For example:
    • Do frequent conflicts at work predict increased burnout or decreased job satisfaction?
    • Does experiencing daily positive events, such as connecting with loved ones, contribute to higher levels of happiness and life satisfaction?

Here are some key characteristics and considerations for event-based protocols:

  • Clear Event Definition: Event-based protocols require a clear definition of the target event to minimize ambiguity and ensure accurate recording. Researchers need to provide participants with specific instructions about what constitutes the event and when to initiate a recording. For example, in a study on smoking, researchers should specify whether a single puff constitutes a smoking event or if participants should only record instances when they smoke an entire cigarette.
  • Participant Initiation: In most cases, participants are responsible for recognizing the occurrence of the event and initiating the assessment. This assumes a certain level of awareness and willingness to interrupt their activity to record data.
  • Event Characteristics: Event-based protocols are suitable for studying events that are:
    • Discrete: Events should have a clear beginning and end, making it easier to determine when to record data.
    • Salient: Events should be noticeable enough for participants to recognize and remember to record them.
    • Fairly Frequent: The event should occur frequently enough to provide sufficient data points for analysis, but not so frequently that it becomes burdensome.
  • Compliance Challenges: Verifying compliance with event-based protocols can be challenging as there’s no way to ensure participants record every instance of the target event. Participants might forget, be unable to record at the moment, or choose not to report certain events.
  • Potential for Bias: The data collected through event-based protocols might be biased toward more memorable, intense, or consciously recognized events. Events that are less salient or occur during periods of distraction might be underreported.

Hybrid Sampling Designs

Hybrid sampling in EMA research combines elements of different sampling designs, such as event-based sampling, fixed-interval sampling, and random-interval sampling, to leverage the strengths of each approach and address a wider range of research questions within a single study.

This approach is particularly valuable when researchers want to capture both the general flow of daily experiences and specific events that might be infrequent or easily missed with purely time-based sampling.

Here are some common ways researchers combine sampling designs in hybrid EMA studies:

Adding a daily diary component to an experience sampling study

Researchers often enhance experience sampling studies with a daily diary component, typically administered in the evening.

While the experience sampling portion provides insights into momentary experiences at random intervals, the daily diary can assess global aspects of the day, such as overall mood, sleep quality, significant events, or reflections on the day’s experiences.

For instance, a study could use experience sampling to assess momentary stress and coping strategies throughout the day and then use a daily diary to measure participants’ overall perceived stress for that day and their use of specific coping strategies across the entire day.

This combination allows researchers to understand how momentary experiences relate to more global daily perceptions. Some studies incorporate both morning and evening diaries to capture experiences surrounding sleep and the transition into and out of the study’s focus time frame.

Incorporating event-based surveys into time-based designs

One limitation of purely random-interval sampling is that it might not adequately capture specific events of interest, especially if they are infrequent or unpredictable.

To address this, researchers can augment time-based protocols with event-based surveys, prompting participants to complete additional assessments whenever a predefined event occurs.

For example, a study on social anxiety could use random-interval sampling to assess participants’ general mood and anxiety levels throughout the day and then trigger an event-based survey immediately after each social interaction exceeding a certain duration, allowing for a more detailed examination of anxiety experiences in social contexts.

This hybrid approach provides a more comprehensive understanding of both the general experience of anxiety and the specific factors that influence it in real-life situations.

Combining time-based designs at different time scales

Researchers can utilize different time-based sampling designs to examine phenomena across different time scales.

For example, a study investigating the long-term effects of a stress-reduction intervention could incorporate weekly assessments using fixed-interval sampling to track changes in overall stress levels.

Additionally, random-interval sampling with end-of-day diaries could be employed to capture daily fluctuations in stress and coping.

Finally, a more intensive experience sampling protocol could be implemented for a shorter period before and after the intervention to assess changes in momentary stress responses.

This multi-level approach allows researchers to gain a comprehensive understanding of how the intervention affects experiences across different time frames, from daily fluctuations to weekly trends.

EMA Protocols

A protocol outlines the procedures for collecting data using ecological momentary assessment.

It acts as a blueprint, guiding researchers in gathering real-time, in-the-moment experiences from participants in their natural environments.

These protocols differ primarily in how and when they prompt participants to record their experiences.

The optimal choice depends on aligning the protocol with the research question, participant burden considerations, technological capabilities, and the intended data analysis approach.

Example of an EMA Protocol

A study investigating the relationship between daily stress and alcohol cravings might involve the following EMA protocol:

  • Device: Participants are provided with a smartphone app.
  • Sampling: Participants receive prompts randomly five times a day between 5 p.m. and 10 p.m. for one week.
  • Questionnaire: Each questionnaire asks participants to rate their current stress level, alcohol craving intensity, and to indicate whether they are alone or with others.
  • Sensor data: The app also passively collects GPS data to determine the participant’s location at each assessment.

By analyzing the collected data, researchers could examine how stress levels fluctuate throughout the evening, whether being alone or with others influences craving intensity, and if certain locations are associated with higher cravings.
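A protocol like this can also be expressed as a small configuration object, so that the app, the analysis code, and the study documentation all refer to the same parameters. An illustrative Python sketch; the field names are hypothetical, not from any particular EMA platform:

```python
from dataclasses import dataclass, field

@dataclass
class EMAProtocol:
    """Container for the protocol parameters described above (illustrative)."""
    device: str
    window: tuple             # daily sampling window (start, end) in 24h hours
    prompts_per_day: int
    days: int
    items: list = field(default_factory=list)
    passive_sensors: list = field(default_factory=list)

protocol = EMAProtocol(
    device="smartphone app",
    window=(17, 22),          # 5 p.m. to 10 p.m.
    prompts_per_day=5,
    days=7,
    items=["stress level", "craving intensity", "alone or with others"],
    passive_sensors=["GPS"],
)
print(protocol.prompts_per_day * protocol.days)  # total planned assessments
```

Writing the protocol down in one place makes it easy to compute planning quantities (here, 35 planned assessments per participant) and to check compliance against the intended schedule later.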

Considerations when choosing a protocol

  • Research Questions: The choice of protocol should be guided by the research questions. If the study aims to understand the general flow of experiences throughout the day, time-based protocols might be suitable. If the goal is to investigate experiences related to specific events, an event-contingent protocol might be more appropriate.
  • Participant Burden: The frequency and timing of assessments can influence participant burden. Researchers should consider the demands of their chosen protocol and balance data collection needs with participant well-being.
  • Feasibility and Technology: The chosen protocol should be feasible to implement with the available technology. For example, event-contingent sampling might require more sophisticated programming or the use of sensors to detect specific events.
  • Data Analysis: The chosen protocol will influence the type of data analysis that can be performed. Researchers should consider their analysis plan when selecting a protocol.

Potential Pitfalls

Anticipating and addressing the following pitfalls helps EMA researchers enhance the rigor, validity, and ethical soundness of their studies, contributing to a richer understanding of human experiences and behavior in everyday life.

  • Participant Burden: Requiring participants to respond to frequent prompts throughout the day can be burdensome and lead to decreased compliance or inaccurate data.
    • To mitigate this, researchers must find a balance between collecting sufficient data and minimizing participant burden.
    • Researchers should carefully consider the number of study days, the frequency of daily assessments (“beeps”), and the length and complexity of the surveys.
    • Offering incentives can also encourage participation and completion.
  • Technology Issues: EMA studies often rely on technology, which can introduce technical challenges.
    • Researchers need to ensure the chosen technology is compatible with participants’ devices and operating systems.
    • Signal delivery failures, such as notifications not appearing or calls going unanswered, need to be addressed.
    • Researchers should have contingency plans in case of system crashes or data loss.
  • Data Quality: EMA data is susceptible to various threats to data quality, including:
    • Reactivity: Participants may alter their behavior or responses due to the awareness of being monitored. Researchers should be mindful of this and consider ways to minimize reactivity, such as using a less intrusive assessment schedule.
    • Response Bias: Participants may develop patterns of responding that do not reflect their true experiences (e.g., straightlining or acquiescence bias). Randomizing item order and offering a range of response options can help mitigate this.
    • Missing Data: Participants might miss assessments due to forgetfulness, inconvenience, or technical issues. Researchers should establish clear guidelines for handling missing data and consider using statistical techniques that account for missingness.
  • Sample Bias: Participants who volunteer for and complete EMA studies may differ systematically from those who do not, introducing selection bias.
    • Researchers should be aware of this possibility and consider factors that might influence participation, such as age, occupation, comfort with technology, and privacy concerns.
  • Ethical Considerations: Collecting data in naturalistic settings raises ethical considerations related to privacy, data security, and informed consent, especially when dealing with sensitive information.
    • Researchers must obtain informed consent, ensure data confidentiality, and address potential risks to participants’ privacy and well-being.
  • Data Analysis: Analyzing EMA data requires specialized statistical techniques, such as multilevel modeling, to account for the nested structure of the data (repeated measures within individuals). Researchers should be familiar with these techniques or collaborate with a statistician experienced in analyzing EMA data.
  • Formulating Research Questions: The dynamic nature of EMA data requires researchers to formulate specific research questions that differentiate between person-level and situation-level effects. Failure to do so can lead to ambiguous findings and misinterpretations.

Managing Missing Data

Missing data is an inherent challenge in experience sampling research. By understanding the nature and mechanisms of missingness, researchers can make informed decisions about study design, data cleaning, and statistical analysis.

Unlike cross-sectional studies, where missing data might involve a few skipped items or participant dropouts, daily life studies often grapple with substantial missingness across various dimensions.

Employing appropriate strategies to minimize, manage, and model missing data is crucial for enhancing the validity and reliability of EMA findings.

There are several strategies for handling missing data in EMA research, each with implications for data analysis and interpretation:

  1. Design Considerations for Minimizing Missingness:
    • User-Friendly Design: Employing an intuitive and convenient survey system, as well as clear instructions and reminders, can enhance participant engagement and minimize avoidable missingness.
    • Strategic Sampling Schedule: Carefully considering the frequency and timing of assessments can reduce participant burden and improve response rates.
    • Incentivizing Participation: Appropriate incentives, such as monetary compensation or raffle entries, can motivate participants to respond consistently.
  2. Data Cleaning and Identifying “Screen Outs”:
    • Detecting Random Responding: Identifying and addressing patterns of inconsistent or nonsensical responses, such as using standard deviations across items or examining responses to related items, can improve data quality.
  • Establishing Exclusion Criteria: Developing clear guidelines for excluding participants or assessment occasions based on pre-defined criteria, such as low response rates or technical errors, ensures data integrity. This might involve setting thresholds for low response rates, identifying technical errors, or flagging suspicious response patterns.
  3. Statistical Techniques for Handling Missingness:
    • Full-Information Maximum Likelihood (FIML) and Multiple Imputation: These advanced statistical techniques can handle missing data effectively, particularly in the context of multilevel modeling, which is commonly used in EMA research. These methods can provide relatively unbiased parameter estimates, even with complex missing data patterns.
    • Modeling Time: It is important to consider the role of time in EMA analyses. Depending on the research question, time can be treated as a predictor, an outcome, or incorporated into the model structure (e.g., autocorrelated residuals). In practice, however, time is often omitted, particularly in intensive, within-day EMA studies, where random sampling is assumed to capture a representative sample of daily experiences.
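The data-cleaning steps above, detecting straightlined responses via item-level variability and applying a response-rate exclusion threshold, can be sketched as follows. The thresholds and data are illustrative only; real studies should pre-register their own criteria.

```python
from statistics import pstdev

def flag_suspect_responses(responses, sd_threshold=0.0, min_rate=0.5):
    """Flag straightlined assessments (zero variability across items) and
    participants whose response rate falls below a pre-set minimum."""
    flags = {}
    for pid, surveys in responses.items():
        completed = [s for s in surveys if s is not None]
        # A survey whose items show (near-)zero spread suggests straightlining.
        straightlined = sum(1 for s in completed if pstdev(s) <= sd_threshold)
        rate = len(completed) / len(surveys)
        flags[pid] = {"response_rate": rate,
                      "straightlined": straightlined,
                      "exclude": rate < min_rate}
    return flags

# None marks a missed prompt; each inner list holds one survey's item ratings.
data = {"p1": [[3, 4, 2], [5, 5, 5], None, [2, 3, 4]],
        "p2": [None, None, None, [4, 4, 4]]}
print(flag_suspect_responses(data))
```

Flags like these should prompt inspection rather than automatic deletion: a straightlined survey can also reflect a genuinely uniform momentary state.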

Implications for Data Analysis and Interpretation:

  • Bias: Perhaps the most concerning implication of missing data is its potential to introduce bias into the findings, particularly if the missingness is systematically related to the variables under investigation. For example, if individuals experiencing high levels of stress are more likely to skip surveys, the results might underestimate the true relationship between stress and other variables.
  • Reduced Power: Missing data, especially if substantial, can reduce the study’s statistical power, making it more challenging to detect statistically significant effects. This means that real effects might be missed due to the reduced ability to discern them from random noise.
  • Interpretational Challenges: The often complex and multifaceted nature of missing data in EMA research can complicate the interpretation of findings. When the reasons behind the missingness are unclear, drawing firm conclusions about the relationships between variables becomes challenging. Researchers should be cautious in their interpretations and transparent about the limitations posed by missing data.

The Trade-off Between Ecological Validity and Reactivity

Ecological momentary assessment (EMA) research involves a delicate balancing act. Researchers aim for ecological validity by capturing experiences in their natural habitat, but must remain vigilant about reactivity and its potential to skew findings.

By understanding the factors that influence reactivity and strategically designing studies to mitigate it, researchers can harness the power of EMA to illuminate the nuances of human behavior and experience in the real world.

Ecological Validity: Capturing Life as It Happens

  • A primary goal of EMA is to achieve high ecological validity – the extent to which findings can be generalized to real-world settings.
  • Traditional research often relies on laboratory studies or retrospective self-reports, both of which can suffer from artificiality and recall bias.
  • EMA addresses these limitations by collecting data in participants’ natural environments, as they go about their daily lives. This in-the-moment assessment provides a more authentic window into people’s experiences and behaviors.
  • EMA is well-suited to studying phenomena that are context-dependent or influenced by situational factors.

Reactivity: The Observer Effect

  • Reactivity, a potential pitfall of EMA, refers to the phenomenon where the act of measurement itself influences the behavior or experience being studied.
  • Repeatedly prompting participants to reflect on their experiences might alter those experiences. For instance, asking individuals to track their mood multiple times a day could make them more self-aware and potentially change their emotional patterns.
  • Self-monitoring can be a component of behavior change interventions, further highlighting the potential for reactivity in EMA designs.

Navigating the Trade-off

Reactivity is not inevitable in EMA studies. Several factors can influence its likelihood:

  • Focus on behavior change: Reactivity is more likely when participants are actively trying to modify the target behavior. If the study focuses solely on observation and not on intervention, reactivity might be less of a concern.
  • Timing of recording: Recording a behavior before it occurs (e.g., asking participants if they intend to smoke in the next hour) can increase reactivity. Focusing on past behavior minimizes this risk.
  • Number of target behaviors: Assessing a single behavior repeatedly might heighten participants’ awareness and influence their actions. Studies tracking multiple behaviors or experiences are less likely to be reactive.

Researchers can employ strategies to minimize reactivity:

  • Ensuring anonymity and confidentiality: Assuring participants that their data will be kept private can reduce concerns about social desirability bias.
  • Framing the study objectives neutrally: Presenting the study goals in a way that does not imply a desired outcome can minimize participants’ attempts to control their responses.
  • Using a less intrusive assessment schedule: Reducing the frequency or duration of assessments can reduce participant burden and minimize self-awareness.

Ethical Considerations

Using intensive, repeated assessments in daily life research, while valuable for understanding human behavior in context, raises important ethical considerations.

Mitigating Participant Burden:

Participant burden refers to the effort and demands placed on participants due to the repeated nature of data collection, potentially impacting compliance and data quality.

Several strategies can be used to minimize the potential burden associated with frequent assessments:

  1. Optimizing Assessment Design:
    • Limiting survey length: Keeping surveys brief (ideally under 5-7 minutes) and using concise items is crucial.
    • Strategic sampling frequency: Finding a balance between data density and participant tolerance is key. While no definitive guidelines exist, 5-8 assessments per day might strike a reasonable balance for many studies. However, factors like survey length, study duration, and participant characteristics should guide these decisions.
    • Respecting participant time: Allowing participants to choose or adjust assessment windows (e.g., avoiding early mornings or late nights) can enhance compliance and minimize disruption.
  2. Utilizing Technology Thoughtfully:
    • “Livability functions”: Employing devices and apps that allow participants to mute or snooze notifications when necessary can prevent unwanted interruptions during sensitive situations.
    • Minimizing intrusiveness: Opting for familiar technologies (e.g., participants’ own smartphones) and user-friendly interfaces can reduce the burden of learning new systems and integrating them into daily routines.
  3. Open Communication and Support:
  • Clear instructions and expectations: Providing comprehensive information about the study’s demands and procedures during the consent process and throughout data collection is essential. Anticipate common participant questions (e.g., regarding missed assessments, technical issues, study duration) and provide clear answers.
    • Regular check-ins: Maintaining contact with participants during the study (e.g., through emails or brief calls) can help identify and address potential issues, provide support, and reinforce engagement.
    • Transparency and feedback: Offering participants insights into the study’s goals and findings, as well as acknowledging their contributions, can foster a sense of collaboration and value.
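The scheduling advice above (a handful of prompts per day, spread across a participant-chosen waking window) is often implemented as signal-contingent sampling: the day is divided into equal blocks and one prompt time is drawn at random within each block, so prompts are unpredictable but never cluster. The sketch below illustrates this in Python; the function name and defaults are illustrative, not drawn from any particular EMA platform.

```python
import random
from datetime import datetime, timedelta

def schedule_prompts(wake="09:00", sleep="21:00", n_prompts=6, seed=None):
    """Draw one random prompt time per equal-length block of the waking
    window (signal-contingent sampling). Returns HH:MM strings.

    A minimal sketch: real EMA software would also enforce minimum gaps
    between prompts, handle snoozing, and persist the schedule.
    """
    rng = random.Random(seed)
    start = datetime.strptime(wake, "%H:%M")
    end = datetime.strptime(sleep, "%H:%M")
    block = (end - start) / n_prompts  # length of each sampling block
    prompts = []
    for i in range(n_prompts):
        block_start = start + i * block
        # Uniform random offset within this block keeps prompts
        # unpredictable while guaranteeing even coverage of the day.
        offset = timedelta(seconds=rng.uniform(0, block.total_seconds()))
        prompts.append(block_start + offset)
    return [t.strftime("%H:%M") for t in prompts]
```

Because each prompt is confined to its own block, participants cannot anticipate exact prompt times (reducing reactivity), yet no stretch of the day goes unsampled, and the wake/sleep parameters let participants shift the window to respect their routines.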

Ensuring Informed Consent:

Intensive, repeated assessments call for robust informed consent procedures that go beyond traditional approaches to address their unique ethical challenges:

  • Explicitly Addressing Burden: The consent process should clearly articulate the expected time commitment, frequency of assessments, and potential disruptions associated with study participation. Researchers should be transparent about the potential for burden and fatigue, even when using strategies to minimize them.
  • Flexibility and Control: Participants should be informed of their right to decline or reschedule assessments when necessary, without penalty. Emphasizing participant autonomy and control over their involvement is paramount.
  • Data Security and Privacy: Given the sensitive nature of data often collected in daily life research, the consent process must clearly outline data storage procedures, security measures, and plans for de-identification or anonymization to ensure participant confidentiality.
  • Addressing Reactivity Concerns: While reactivity to repeated assessments might be less prevalent than often assumed, the consent process should acknowledge this possibility and explain any measures taken to mitigate it.
  • Ongoing Dialogue: Informed consent should be viewed as an ongoing process rather than a one-time event. Researchers should create opportunities for participants to ask questions, express concerns, and receive clarification throughout the study.

Reading List

Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Sage Publications.

Rintala, A., Wampers, M., Myin-Germeys, I., & Viechtbauer, W. (2019). Response compliance and predictors thereof in studies using the experience sampling method. Psychological Assessment, 31(2), 226–235. https://doi.org/10.1037/pas0000662

Trull, T. J., & Ebner-Priemer, U. (2013). Ambulatory assessment. Annual Review of Clinical Psychology, 9(1), 151–176.

Van Berkel, N., Ferreira, D., & Kostakos, V. (2017). The experience sampling method on mobile devices. ACM Computing Surveys, 50(6), 1–40.

Examples of ESM Studies

Bylsma, L. M., Taylor-Clift, A., & Rottenberg, J. (2011). Emotional reactivity to daily events in major and minor depression. Journal of Abnormal Psychology, 120(1), 155–167. https://doi.org/10.1037/a0021662

Geschwind, N., Peeters, F., Drukker, M., van Os, J., & Wichers, M. (2011). Mindfulness training increases momentary positive emotions and reward experience in adults vulnerable to depression: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 79(5), 618–628. https://doi.org/10.1037/a0024595

Hoorelbeke, K., Koster, E. H. W., Demeyer, I., Loeys, T., & Vanderhasselt, M.-A. (2016). Effects of cognitive control training on the dynamics of (mal)adaptive emotion regulation in daily life. Emotion, 16(7), 945–956. https://doi.org/10.1037/emo0000169

Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4(1), 1–32.

Kim, S., Park, Y., & Headrick, L. (2018). Daily micro-breaks and job performance: General work engagement as a cross-level moderator. Journal of Applied Psychology, 103(7), 772–786. https://doi.org/10.1037/apl0000308

Shoham, A., Goldstein, P., Oren, R., Spivak, D., & Bernstein, A. (2017). Decentering in the process of cultivating mindfulness: An experience-sampling study in time and context. Journal of Consulting and Clinical Psychology, 85(2), 123–134. https://doi.org/10.1037/ccp0000154

Steger, M. F., & Frazier, P. (2005). Meaning in Life: One Link in the Chain From Religiousness to Well-Being. Journal of Counseling Psychology, 52(4), 574–582. https://doi.org/10.1037/0022-0167.52.4.574

Sun, J., Harris, K., & Vazire, S. (2020). Is well-being associated with the quantity and quality of social interactions? Journal of Personality and Social Psychology, 119(6), 1478–1496. https://doi.org/10.1037/pspp0000272

Sun, J., Schwartz, H. A., Son, Y., Kern, M. L., & Vazire, S. (2020). The language of well-being: Tracking fluctuations in emotion experience through everyday speech. Journal of Personality and Social Psychology, 118(2), 364–387. https://doi.org/10.1037/pspp0000244

Thewissen, V., Bentall, R. P., Lecomte, T., van Os, J., & Myin-Germeys, I. (2008). Fluctuations in self-esteem and paranoia in the context of daily life. Journal of Abnormal Psychology, 117(1), 143–153. https://doi.org/10.1037/0021-843X.117.1.143

Thompson, R. J., Mata, J., Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Gotlib, I. H. (2012). The everyday emotional experience of adults with major depressive disorder: Examining emotional instability, inertia, and reactivity. Journal of Abnormal Psychology, 121(4), 819–829. https://doi.org/10.1037/a0027978

Van der Gucht, K., Dejonckheere, E., Erbas, Y., Takano, K., Vandemoortele, M., Maex, E., Raes, F., & Kuppens, P. (2019). An experience sampling study examining the potential impact of a mindfulness-based intervention on emotion differentiation. Emotion, 19(1), 123–131. https://doi.org/10.1037/emo0000406

Olivia Guy-Evans, MSc

BSc (Hons) Psychology, MSc Psychology of Education

Associate Editor for Simply Psychology

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
