May 12, 2025

AI: Software Options (Integrating Your Model)

You’ve journeyed through planning your AI, carefully crafting your datasets, and even training your model on a platform like Teachable Machine. That’s fantastic! But a trained model sitting on a training website isn’t very useful on its own.

The next exciting step is to integrate your AI model into your actual application (likely in App Inventor or Thunkable) so it can start making predictions and taking action in the real world!

Lesson Topic: AI: Software Options (Integrating Your Model)

Section 1: Why Integrate? (From Model to Action!)

Think of your trained AI model as a smart brain you’ve educated. Integration is the process of connecting that brain to the body – your mobile app – so it can actually do something useful.

Integration allows your app to:

  1. Send New Data: Take input from the user or phone sensors (like a photo from the Camera, a sound from the Microphone, or text typed by the user).
  2. Get Prediction: Send this input to your trained AI model and receive its prediction (e.g., “This image is ‘Healthy Leaf’,” “This sound is ‘Stress Detected’,” “This text is ‘Positive Sentiment’”).
  3. Take Action: Based on the prediction received, your app can then perform a specific action (e.g., display relevant advice, play a warning sound, categorize feedback, navigate to a specific screen).

This is where your AI goes from being an experiment to being a functional part of your solution!
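
If it helps to see that three-step loop as code, here is a minimal Python sketch. The classify() function is a hypothetical stand-in for whatever your platform actually provides (an extension block, a Web API call, or a loaded model):

```python
# A minimal sketch of the integrate-and-act loop.
# classify() is hypothetical: in a real app it would be an extension
# block, a Web API call, or a call into a loaded model.

def classify(image_path):
    # Pretend the model looked at the image and returned its best guess.
    return "Healthy Leaf", 0.92

def handle_photo(image_path):
    label, confidence = classify(image_path)  # Steps 1 & 2: send data, get prediction
    if confidence < 0.6:                      # Step 3: take action on the result
        print("Not sure, try another photo.")
    elif label == "Healthy Leaf":
        print("Your plant looks healthy!")
    else:
        print(f"Possible issue detected: {label}")

handle_photo("leaf.jpg")
```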

Section 2: Choosing Your Integration Path

How you integrate your model depends mainly on two things:

  1. Where did you TRAIN your model? (e.g., Teachable Machine, ML4K, App Inventor Extension, Ximilar)
  2. Where are you BUILDING your app? (e.g., App Inventor, Thunkable, maybe Python/Streamlit for advanced teams)

Let’s look at the most common scenarios for Technovation teams:

Section 3: How to Connect (Integration Methods)

Here are the likely paths based on your choices:

  • Scenario A: Trained with Teachable Machine → Building with App Inventor
    • Best Option: Use the Teachable Machine Image Classifier Extension. The lesson mentions one by Fabiano Oliveira (you might need to search for the specific .aix file online or via Technovation resources).
    • How:
      1. Download the .aix extension file.
      2. In App Inventor, go to the Palette on the left, scroll down to “Extension,” click “Import extension,” and upload the .aix file.
      3. Drag the new “TeachableMachineClassifier” component onto your Viewer (it will be non-visible).
      4. In the Properties for this component, you’ll likely need to paste the sharing link for your trained Teachable Machine model (get this from your TM project after training and exporting to the cloud).
      5. In the Blocks editor, use the blocks provided by the extension. Typically, you’ll use a block to send an image (from the Camera or ImagePicker component) to the classifier, and another block (an event handler like When ClassificationDone) to receive the results (the predicted label and confidence score).
  • Scenario B: Trained with Ximilar → Building with Thunkable
    • Method: Use Ximilar’s API (Application Programming Interface).
    • How:
      1. Get your API Key from your Ximilar account/project.
      2. In Thunkable, add the Web API component (it’s non-visible, find it in the Blocks tab under “Advanced”).
      3. Configure the Web API component’s properties: Set the URL to the Ximilar endpoint for your classification model, set the Method to POST, and add your API Key in the Headers section.
      4. In the Blocks, when you want to classify an image (e.g., after the user takes a photo with the Camera), you’ll need to prepare the image data (often converting it to Base64 format) and send it in the ‘Body’ of the Web API’s POST block.
      5. Use the Web API component’s then do block to handle the response from Ximilar, which will contain the prediction results in JSON format (you’ll need to parse it). A Python sketch of this same request appears just after this list.
    • Refer back to the Ximilar/Thunkable tutorial mentioned in Lesson 5.5 for detailed steps, as API integration can be tricky.
  • Scenario C: Trained with ML4K → Building with App Inventor/Python
    • ML4K usually provides specific instructions or code snippets within your project page there. You might get an API key or need to use specific ML4K blocks/libraries. Follow their documentation for integration.
  • Scenario D: Trained within App Inventor (using AI Extensions) → Building with App Inventor
    • This is the most straightforward! Since the model is part of an App Inventor component already, you just use the blocks associated with that specific AI component (like PersonalImageClassifier or PersonalAudioClassifier) directly in your app’s logic to make predictions.
  • Scenario E: Trained with Teachable Machine → Building a Web Page (More Advanced)
    • If you want a simple web demo, use the TensorFlow.js code snippet from Teachable Machine. Save it as an .html file and open it in a browser. It usually uses the webcam. The lesson links to modified code if you want to allow image uploads instead. This doesn’t directly integrate with App Inventor/Thunkable but can be a way to showcase your model online. Python/TensorFlow options are much more advanced.
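
To make Scenario B more concrete, here is a rough Python sketch of the same request using the requests library. The endpoint URL, the Token header format, and the response field names follow Ximilar’s public REST API, but treat them as assumptions and double-check them against your own project’s documentation. Thunkable’s Web API component is sending essentially this same request:

```python
import base64
import requests

API_TOKEN = "YOUR_XIMILAR_API_KEY"  # from your Ximilar account (step 1 above)
TASK_ID = "YOUR_TASK_ID"            # identifies your trained classification task

# Read the photo and Base64-encode it, as step 4 above describes.
with open("mango.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://api.ximilar.com/recognition/v2/classify/",  # check your project's endpoint
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"task_id": TASK_ID, "records": [{"_base64": image_b64}]},
)

# This is the JSON you would otherwise parse in Thunkable's "then do" block.
result = response.json()
best = result["records"][0]["best_label"]  # field names may differ; check the docs
print(best["name"], best["prob"])
```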

Section 4: Let’s Integrate! (Activity – 60+ Minutes)

Time to connect your AI brain to your app body!

Your Mission: Start integrating your trained AI model into your chosen app platform (App Inventor or Thunkable) and make it trigger an action.

Task Steps:

  1. Identify Your Path: Which scenario from Section 3 matches your team’s tools (Training Platform + App Platform)?
  2. Find Your Guide: Locate the specific instructions for your path (e.g., tutorial for the App Inventor TM extension, Ximilar API docs, ML4K project page instructions, AI extension blocks in App Inventor).
  3. Start Your App Project: Create a new project or open your existing one in App Inventor/Thunkable.
  4. Add/Connect the Model: Follow your guide to:
    • Import the extension (App Inventor).
    • Configure the Web API component (Thunkable/Ximilar).
    • Use the built-in AI component blocks (App Inventor Extensions).
    • Set up API keys or model links as needed.
  5. Code the Interaction: This is the core part! Add blocks/code to:
    • Get Input: Use components like Camera, SoundRecorder, ImagePicker, or TextBox to get data from the user or sensors.
    • Send to Model: Use the appropriate blocks (from the extension, Web API component, etc.) to send this input data to your loaded/connected AI model.
    • Receive Result: Use the event handler block (like When ClassificationDone or Web_API’s then do) to get the prediction result (usually a label like “Healthy” and a confidence score like 0.95).
    • !!! TAKE ACTION !!!: Use if/else if/else blocks to check the prediction result and make your app DO something meaningful!
      • Example (App Inventor with TM Extension): `when TeachableMachineClassifier1.ClassificationDone (label, confidence) do → if label = "Ripe Mango" and confidence > 0.8 then set ResultLabel.Text to "Ready to eat!" else if label = "Unripe Mango" then set ResultLabel.Text to "Wait a few more days." else set ResultLabel.Text to "Not sure, try again."`
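
For teams taking the advanced Python route (mentioned in Section 2 and in Scenario E above), here is a rough sketch of the same receive-result-then-act pattern, assuming you exported your Teachable Machine model with its TensorFlow → Keras option, which produces keras_model.h5 and labels.txt. The file names and preprocessing below follow TM’s own export snippet, so verify them against the code TM generates for your project:

```python
# Sketch: classify one image with a Teachable Machine Keras export, then act.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    # labels.txt lines look like "0 Ripe Mango"; keep just the class name.
    labels = [line.strip().split(" ", 1)[1] for line in f]

# TM image models expect 224x224 RGB pixels scaled to the range [-1, 1].
image = Image.open("mango.jpg").convert("RGB").resize((224, 224))
data = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0
prediction = model.predict(data[np.newaxis, ...])[0]

label = labels[int(np.argmax(prediction))]
confidence = float(np.max(prediction))

# The same TAKE ACTION logic as the blocks example above.
if label == "Ripe Mango" and confidence > 0.8:
    print("Ready to eat!")
elif label == "Unripe Mango":
    print("Wait a few more days.")
else:
    print("Not sure, try again.")
```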

Be Patient: Integration can involve technical steps. Follow your chosen guide carefully. It might take time!

Section 5: Check Your Plan & Get Feedback (Reflection)

Once you have a basic integration working (even if it just displays the prediction):

  • Does it Work? Celebrate getting the connection made! This is a big step!
  • Project Plan Check: Does this integration process affect your project timeline? Update your task list and schedule if needed.
  • User Feedback: As soon as possible, let users try the integrated feature! Don’t just ask “Does the AI work?” Ask: “Is the action the app takes based on the prediction helpful? Is it clear? Is it accurate enough?” Get feedback on the entire experience.

Section 6: Quick Review (Key Terms)

  • Software: Computer programs or applications.
  • Extensions (.aix for App Inventor): Add-on software that gives App Inventor new capabilities (like connecting to Teachable Machine).
  • Integration: Connecting different software components (like your AI model and your app) so they work together.
  • API (Application Programming Interface): A set of rules allowing different software pieces to communicate (used by Ximilar, ML4K).
  • Code Snippet: A small piece of pre-written code provided by a platform (like Teachable Machine for web integration).

Section 7: Advanced Tools & Resources

Remember, there are more advanced ways to build and deploy AI (like using Python libraries directly in Google Colab, or tools like DialogFlow for chatbots), but focus first on mastering the integration method that matches your chosen Technovation tools.

ADDITIONAL RESOURCES

Marshmallow sorter using Teachable Machine and Coral.

Check out these videos on more advanced AI tools!

Conclusion

Integrating your AI model is where your project can really start to feel “intelligent” and powerful. It might involve some technical hurdles depending on the platforms you chose, but carefully following the specific instructions for your path (App Inventor extension, Ximilar API, etc.) is key.

Focus on getting the basic connection working first, then making the app take a meaningful action based on the prediction. Don’t hesitate to use the help resources or ask mentors if you get stuck!
