Cooking meals can be difficult, leading many home cooks to rely on cookbooks and online recipes, which can result in missing ingredients, nutritional hazards, and unsatisfactory meals. Augmented Reality (AR) can address these issues; however, current AR cooking applications suffer from poor user interfaces and limited accessibility. This paper proposes a prototype iOS application that integrates AR and Computer Vision (CV) into the cooking process. We leverage Google's Gemini Large Language Model (LLM) to identify ingredients in the camera's field of view and to generate recipe choices with their nutritional information. Additionally, the application uses Apple's ARKit to create an AR user interface compatible with iOS devices. Users can personalize their meal suggestions by inputting dietary preferences and rating each meal. The application's effectiveness is evaluated through user-experience surveys. This application contributes to the field of accessible cooking-assistance technologies, aiming to reduce food waste and improve the meal-planning experience.
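As a rough illustration of the pipeline the abstract describes, the sketch below shows one plausible way to wire an ARKit camera frame into a multimodal Gemini request for ingredient identification. The class name, model choice (gemini-1.5-flash), prompt wording, and response parsing are assumptions for illustration only, not the paper's actual implementation.

```swift
import ARKit
import CoreImage
import UIKit

// Hypothetical helper: sends the current ARKit camera frame to the Gemini
// REST API and asks it to list visible ingredients. Names and prompt text
// are illustrative assumptions, not the paper's exact setup.
final class IngredientScanner {
    private let apiKey: String
    private let ciContext = CIContext()

    init(apiKey: String) { self.apiKey = apiKey }

    // Encode an ARFrame's pixel buffer as JPEG for the multimodal request.
    private func jpegData(from frame: ARFrame) -> Data? {
        let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage).jpegData(compressionQuality: 0.7)
    }

    func identifyIngredients(in frame: ARFrame,
                             completion: @escaping ([String]) -> Void) {
        guard let jpeg = jpegData(from: frame) else { completion([]); return }

        var request = URLRequest(url: URL(string:
            "https://generativelanguage.googleapis.com/v1beta/models/" +
            "gemini-1.5-flash:generateContent?key=\(apiKey)")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        // One text part (the instruction) plus one inline image part.
        let body: [String: Any] = [
            "contents": [[
                "parts": [
                    ["text": "List the food ingredients visible in this photo, one per line."],
                    ["inline_data": ["mime_type": "image/jpeg",
                                     "data": jpeg.base64EncodedString()]]
                ]
            ]]
        ]
        request.httpBody = try? JSONSerialization.data(withJSONObject: body)

        URLSession.shared.dataTask(with: request) { data, _, _ in
            // Minimal parsing of candidates[0].content.parts[0].text;
            // note the completion handler runs on a background queue.
            guard let data = data,
                  let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
                  let candidates = json["candidates"] as? [[String: Any]],
                  let content = candidates.first?["content"] as? [String: Any],
                  let parts = content["parts"] as? [[String: Any]],
                  let text = parts.first?["text"] as? String else {
                completion([]); return
            }
            let ingredients = text
                .split(separator: "\n")
                .map { $0.trimmingCharacters(in: .whitespaces) }
                .filter { !$0.isEmpty }
            completion(ingredients)
        }.resume()
    }
}
```

In a sketch like this, a caller would invoke identifyIngredients(in:) from the ARSessionDelegate's session(_:didUpdate:) callback, throttled (e.g., to one request every few seconds) to limit API traffic while the camera moves over the ingredients.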