DEMO 6-1: Deploying a Gradio App on Hugging Face


1. Hugging Face Introduction

Hugging Face is an artificial intelligence company, founded in 2016, that specializes in natural language processing (NLP). The company is renowned for its open-source Transformers library, which provides a multitude of pre-trained models for NLP tasks such as text classification, sentiment analysis, machine translation, and question answering. Hugging Face aims to make advanced NLP technology accessible to everyone: researchers, developers, and businesses alike.

Key features and contributions of Hugging Face include:

  • Open Source Community: Hugging Face has an active open-source community where contributors constantly update and improve the models and tools in the library.
  • Transformers Library: This is the core product of Hugging Face, offering a unified interface to use and fine-tune pre-trained Transformer models such as BERT, GPT, T5, etc.
  • Model Sharing Platform: Hugging Face provides a platform for sharing models, where users can upload, share, and discover pre-trained models for specific tasks.
  • Hugging Face Spaces: This is a hosting platform where users can deploy and share their Gradio applications, making them accessible to anyone.
  • Gradio: An easy-to-use library for quickly creating interactive demos of machine learning models, maintained by Hugging Face since it acquired the Gradio project in 2021.

2. Sharing Gradio Applications Locally and on the Internet

Gradio offers several ways to share your applications, both locally within a network and on the internet. Here’s how you can do both:

  • Local Network Sharing: By default, Gradio apps bind to localhost (127.0.0.1), which means they are only accessible on your own machine. To share your app within your local network, set server_name to '0.0.0.0' in your launch call. Others on the same network can then access the app through your machine's local IP address.
  demo.launch(server_name='0.0.0.0')
  • Internet Sharing: Gradio can expose your app on the internet for a limited time (72 hours) when you set share=True in your launch call. This is useful for quickly sharing your app with others without any deployment.
  demo.launch(share=True)
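For local network sharing, others need your machine's LAN IP address. One standard-library way to look it up (a sketch assuming a typical network setup; the 8.8.8.8 address is only used to pick the outbound interface, no packets are actually sent):

```python
import socket

def local_ip():
    """Return this machine's LAN IP, or 127.0.0.1 if no route exists."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # A UDP connect sends no traffic; the OS just selects the
        # outbound interface, which reveals the local address.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(local_ip())
```

With server_name='0.0.0.0', the app listens on all interfaces, so others on the network can reach it at http://&lt;local_ip&gt;:7860 (Gradio's default port).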

3. Deploying on Hugging Face Spaces

Here are the specific steps to deploy a Gradio application on Hugging Face Spaces:

  • First, if you do not have a Hugging Face account, please visit https://huggingface.co/ and click on “Sign Up” to create one.

  • After logging in, click on your avatar and then click on “New Space” below to access this page: https://huggingface.co/new-space

  • Name your Space and choose a license. Select “Gradio” as the Space SDK and choose “Public” if you want everyone to access your Space and underlying code.

  • You will then see a page that provides instructions on how to upload your files to the Space. You need to add a requirements.txt file to specify any Python package dependencies and also upload a main Gradio program, usually named app.py.

  • Once you have pushed your files, the web app will be automatically built, and you can share it with anyone, even embedding your Gradio app on any website.
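Besides the web interface, files can also be pushed to a Space programmatically with the huggingface_hub library. A minimal sketch (the repo ID, file list, and token below are placeholders, and the package must be installed separately with pip install huggingface_hub):

```python
def upload_to_space(repo_id, files, token=None):
    """Upload local files to a Hugging Face Space."""
    # Imported lazily so this sketch loads even without the package installed.
    from huggingface_hub import HfApi

    api = HfApi(token=token)
    for path in files:
        api.upload_file(
            path_or_fileobj=path,
            path_in_repo=path,
            repo_id=repo_id,
            repo_type="space",  # Spaces are a distinct repo type from models
        )

# Example call (requires a write-access token from your account settings):
# upload_to_space("your-username/handwritten-recognition",
#                 ["app.py", "requirements.txt"], token="hf_...")
```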


4. Files to Upload

After creating a new Space, a README.md file is generated automatically. Its front matter defines basic attributes of the Space, such as its title, appearance, and license. Additionally, you can specify the Gradio and Python versions to use.

README.md

---
title: Handwritten Recognition
emoji: 📉
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 4.31.4
python_version: 3.10.0
app_file: app.py
pinned: false
license: mit
---

The app.py file contains the main Gradio program. In this example, a pre-trained random forest model is loaded to recognize handwritten digits.

import gradio as gr
import joblib
import numpy as np

# Load the pre-trained random forest classifier for digit recognition
model = joblib.load('./data/random_forest_model.pkl')

def predict_mnist(image):
    # The ImageMask component passes a dict; the drawn strokes live in
    # the alpha channel of the "composite" RGBA array
    normalized = image['composite'][:, :, -1]
    # Flatten the 28x28 canvas into a single row of 784 pixel features
    flattened = normalized.reshape(1, 784)
    prediction = model.predict(flattened)
    print(normalized.shape, np.max(normalized), prediction[0])
    return prediction[0]

with gr.Blocks(theme="soft") as demo:
    gr.Markdown("""
        <center> 
        <h1>Handwritten Digit Recognition</h1>
        <b>jason.yu.mail@qq.com 📧</b>
        </center>
        """)
    gr.Markdown("Draw a digit and the model will predict it. Please draw the digit in the center of the canvas.")
    with gr.Row():
        outtext = gr.Textbox(label="Prediction")
    with gr.Row():
        inputimg = gr.ImageMask(image_mode="RGBA", crop_size=(28, 28))

    # Re-run the prediction whenever the drawing changes
    inputimg.change(predict_mnist, inputimg, outtext)

demo.launch()
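To see what the prediction callback receives, here is a standalone sketch of the preprocessing step using a fake canvas (the dict layout with a "composite" RGBA array mirrors what gr.ImageMask passes to the callback in this app):

```python
import numpy as np

# A fake 28x28 RGBA canvas shaped like the ImageMask "composite" array
canvas = {"composite": np.zeros((28, 28, 4), dtype=np.uint8)}
canvas["composite"][10:18, 10:18, -1] = 255  # painted strokes live in alpha

alpha = canvas["composite"][:, :, -1]  # drop RGB, keep the alpha channel
flattened = alpha.reshape(1, 784)      # one sample with 784 pixel features

print(flattened.shape)  # → (1, 784)
```

The (1, 784) shape matches what the random forest expects: one row per sample, one column per pixel of the 28x28 grid.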

The newly created Space environment comes with Gradio and Python pre-installed. This app additionally depends on scikit-learn and joblib, which are not pre-installed, so you must upload a requirements.txt file listing the packages the environment needs, preferably with pinned versions.

requirements.txt

joblib
scikit-learn==1.1.2
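To pin exact versions matching your local environment, you can read the installed versions with the standard library (the package names below are just the two this demo uses):

```python
from importlib import metadata

def pinned(packages):
    """Return requirements.txt lines pinned to locally installed versions."""
    lines = []
    for pkg in packages:
        try:
            lines.append(f"{pkg}=={metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            lines.append(pkg)  # leave unpinned if not installed locally
    return lines

print("\n".join(pinned(["joblib", "scikit-learn"])))
```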

The published app can be embedded in a Jupyter notebook with an IFrame:

from IPython.display import IFrame
IFrame(src='https://junchuanyu-handwritten-recognition.hf.space', width=1000, height=800)