Researchers in Taiwan asked 150 nutrition students about their use of ChatGPT, an AI chatbot, to help with their studies and future work. They found that students were most likely to use ChatGPT when they trusted it to give accurate information. Interestingly, trust was especially important when students used ChatGPT for real patient care tasks, like analyzing what someone ate. The study shows that for AI tools to be helpful in nutrition work, people need to feel confident the information is correct and reliable.

The Quick Take

  • What they studied: Whether nutrition students actually use ChatGPT for their schoolwork and patient care, and what makes them decide to use it or not
  • Who participated: 150 nutrition students in Taiwan (mostly women, average age 21) who had already tried using ChatGPT at least once
  • Key finding: Trust in the AI tool was the strongest predictor of whether students would actually use it, especially for patient-related tasks. Students who felt confident ChatGPT gave accurate information were much more likely to use it regularly.
  • What it means for you: If you’re considering using AI tools for health or nutrition advice, it’s important to verify the information with trusted sources. For professionals, building confidence in AI tools requires knowing they’re accurate and reliable. However, this study only looked at students, so results may not apply to everyone.

The Research Details

Researchers created an online survey based on a well-known model of how people decide to use new technology. They asked 150 nutrition students in Taiwan about their experiences with ChatGPT, including how much they trusted it, whether they intended to use it, and how often they actually used it for different tasks. The researchers then used a statistical method called path analysis, which maps how strongly each factor predicts the others and, ultimately, actual behavior.
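For readers curious about the statistics: path analysis essentially chains regressions together along a proposed causal route (here, trust → intention → actual use) and reports standardized coefficients for each link. The sketch below uses invented numbers, not the study's data, and is only meant to illustrate what a standardized path coefficient (like the 0.523 reported later for trust) measures.

```python
# Minimal illustration of a path-analysis chain: trust -> intention -> use.
# Hypothetical 1-5 ratings from five imaginary students; NOT the study's data.

def standardize(xs):
    """Rescale values to mean 0 and standard deviation 1."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def path_coefficient(x, y):
    """Standardized slope for y ~ x (equals the Pearson correlation)."""
    zx, zy = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(zx, zy)) / len(x)

trust      = [2, 3, 3, 4, 5]
intention  = [1, 3, 4, 4, 5]
actual_use = [1, 2, 4, 5, 5]

a = path_coefficient(trust, intention)       # trust -> intention link
b = path_coefficient(intention, actual_use)  # intention -> use link
indirect = a * b  # indirect effect of trust on use, routed through intention

print(f"trust->intention: {a:.3f}")
print(f"intention->use:   {b:.3f}")
print(f"indirect effect:  {indirect:.3f}")
```

A coefficient near 1 means one variable moves almost in lockstep with the next; real path models (including this study's) also report how much variation in the outcome the whole set of predictors explains.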

The survey covered two main areas: academic tasks (like studying nutrition facts and preparing for exams) and clinical tasks (like analyzing what patients eat and recommending diets). Students rated their trust in ChatGPT, their intention to use it, and how often they actually used it for each type of task.

This approach is useful because it goes beyond just asking people if they like something—it actually measures whether they use it and why.

Understanding what makes people actually use AI tools is important because many new technologies fail not because they don’t work, but because people don’t trust them enough to try them. By identifying trust as the key factor, this research helps developers and educators know what needs to improve before AI tools can be widely adopted in healthcare and nutrition work.

The study had a reasonable sample size of 150 students and used a recognized scientific model of technology adoption. However, all participants were from Taiwan and were already familiar with ChatGPT, so results may not apply to other countries or to people new to AI tools. The study was conducted with students, not practicing professionals, so the findings may differ in real-world healthcare settings. The statistical model explained only about 28-45% of the variation in students' ChatGPT use, meaning other factors not measured in this study also play a role.

What the Results Show

Trust emerged as the most powerful factor predicting whether students would actually use ChatGPT. For learning tasks (like understanding nutrition concepts), both trust and the intention to use the tool mattered. However, for patient care tasks (like analyzing what someone ate), trust was the only significant factor that predicted actual use.

When looking at specific tasks, students used ChatGPT most often for understanding basic nutrition knowledge and analyzing the calories and nutrients in food records. They used it less often for preparing for professional exams and for evaluating patients’ nutritional needs and making diet recommendations.

The statistical model showed that trust and intention to use together explained about 28% of the variation in how much students used ChatGPT for academic tasks, and about 45% for clinical tasks. In other words, the model captured clinical use especially well, and trust carried most of that weight.

Specifically, for clinical tasks like analyzing food records, trust was a very strong predictor (a standardized path coefficient of 0.523, where values closer to 1 indicate a stronger effect), suggesting that students will only use AI tools for patient care if they’re confident the tool is reliable.

The study found that students were more hesitant to use ChatGPT for high-stakes tasks like professional exams and patient care decisions compared to general learning. This suggests people are naturally more cautious about using AI when the consequences of errors are greater. The gender distribution (82% female) reflects the nutrition field but wasn’t analyzed separately in this study.

Previous research has shown that ChatGPT can provide useful nutrition information, but this is the first study to examine what actually makes students use it in practice. The finding that trust is crucial aligns with general research on technology adoption—people are more likely to use new tools when they believe they’re reliable. This study extends that knowledge specifically to nutrition education and patient care.

The study only included nutrition students in Taiwan who had already used ChatGPT, so results may not apply to other countries, other healthcare fields, or people completely new to AI tools. The survey was completed at one point in time, so we can’t know if trust and usage change over time. The study didn’t measure whether ChatGPT’s information was actually accurate, only whether students trusted it. Additionally, the statistical model didn’t explain all the reasons why students use or don’t use ChatGPT—other factors like ease of use, time constraints, or personal preferences weren’t fully captured.

The Bottom Line

For nutrition students and professionals: Before using ChatGPT or similar AI tools for patient care, verify the information with established nutrition guidelines and textbooks. For educators: Build trust in AI tools by teaching students how to evaluate AI-generated information critically. For AI developers: Focus on improving accuracy and transparency to build user confidence. Confidence level: Moderate—this is based on one study with a specific population.

Nutrition students and dietitians should care about this research because it shows what factors influence AI adoption in their field. Educators in nutrition programs should pay attention because it suggests trust-building is essential for technology integration. Healthcare administrators considering AI tools should note that trust is crucial for actual adoption. People seeking nutrition advice should be cautious about relying solely on AI without professional verification. This research is less relevant to people in other healthcare fields, though the principles may apply broadly.

Building trust in AI tools typically takes time as people gain experience and see consistent, accurate results. Students in this study who used ChatGPT regularly had higher trust, suggesting that trust develops through repeated positive experiences. However, trust can be quickly lost if the tool provides inaccurate information, so ongoing verification is important.

Want to Apply This Research?

  • Track which nutrition tasks you use AI tools for and rate your confidence in the results (1-10 scale). Also note how often you verify AI-generated nutrition information with professional sources. This helps you identify which areas need more verification and builds awareness of your trust patterns.
  • Start by using AI tools only for learning basic nutrition concepts, then gradually expand to more complex tasks as you build confidence through verification. Create a simple checklist: (1) Get AI suggestion, (2) Verify with reliable source, (3) Rate accuracy, (4) Decide if you’ll use AI for this task again. This builds trust systematically.
  • Weekly: Log which nutrition tasks you used AI for and whether you verified the information. Monthly: Review your verification rate and accuracy findings. Quarterly: Assess whether your trust in AI tools has changed and whether you’re using them for more complex tasks. This long-term tracking shows how trust develops and helps you make informed decisions about when AI tools are appropriate.

This research describes factors that influence how nutrition students use AI chatbots like ChatGPT. It does not validate ChatGPT as a reliable source for nutrition or medical advice. AI tools can make mistakes and should never replace professional medical or nutritional guidance from a licensed healthcare provider. If you have specific nutrition or health concerns, consult with a registered dietitian or physician. This study was conducted with students in Taiwan and may not apply to other populations or professional settings. Always verify AI-generated health information with established medical guidelines and professional sources before making health decisions.