Don’t forget to fill in the form once you have completed the tutorial
This is the second part of the tutorial, where we will focus on deployment and execution.
In the previous tutorial, we successfully created a credit card fraud detection model utilizing the Flock SDK for federated learning suitability.
For part two, we will create the following files inside the credit_card_fraud_detection directory:
- .env: stores the Pinata API keys
- Dockerfile: defines the container image that hosts the run
- pinata_api.py: communicates with Pinata
- upload_image.sh: a bash script that automatically bundles your source code, together with everything the environment needs, for upload
Now, let’s dive in.
Step-by-Step Guide
.env
The .env file is crucial for defining variables that should not be exposed to the public. In our case:
PINATA_API_KEY=
PINATA_SECRET_API_KEY=
These two variables should be kept secure:
- Create a .env file within the root directory.
- Register a Pinata account here.
- After logging in, click on the “API Key” tab on the left-hand side, and then click on “New Key”.
- You will need two pieces of information: the API Key and the API Secret, which correspond to the two variables in the .env file above.
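Later in this tutorial we load these values with the python-dotenv library. As a rough illustration of what `load_dotenv()` does under the hood, here is a minimal stdlib-only sketch; the parsing rules are deliberately simplified and the real library handles many more cases:

```python
import os

def load_env_file(path=".env"):
    """Simplified stand-in for python-dotenv's load_dotenv():
    parse KEY=VALUE lines and place them into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an '='
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Real environment variables take precedence over the file
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env_file()`, `os.getenv("PINATA_API_KEY")` returns the value from the file, just as it does with python-dotenv.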
Dockerfile
Let me briefly describe what Docker does and why we need it. Docker is a platform that assists developers in packaging, distributing, and managing applications using containers. Containers ensure consistent application performance from development to production, enhancing deployment workflows, resource utilization, and scalability.
Here is the definition of the Dockerfile. Before proceeding:
- Create a file named Dockerfile within the root directory.
- Paste in the code below.
FROM rodrigoflock/flock_base
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000/tcp
CMD [ "python3", "flockCreditCardModel.py" ]
What we’ve done here is start from the existing Docker image, rodrigoflock/flock_base, to set up our own environment. Then we set the working directory, copy requirements.txt from the local machine into the image, and install all the Python packages. After that, we copy the rest of the files. The line EXPOSE 5000/tcp documents that the container listens on port 5000, exposing our portal on that port. Lastly, we execute python3 flockCreditCardModel.py.
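If you want to sanity-check the image locally before moving on to the upload step, something like the following should work. This is a sketch that assumes Docker is installed and running; the credit-card-fraud tag is an arbitrary name I chose, not part of the tutorial's tooling:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t credit-card-fraud .

# Run the container, publishing the exposed port 5000 on the host
docker run --rm -p 5000:5000 credit-card-fraud
```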
pinata_api.py
import os
from dotenv import load_dotenv
from pinatapy import PinataPy
load_dotenv()

PINATA_API_KEY = os.getenv("PINATA_API_KEY")
PINATA_SECRET_API_KEY = os.getenv("PINATA_SECRET_API_KEY")


def pin_file_to_ipfs(path_to_file):
    pinata = PinataPy(PINATA_API_KEY, PINATA_SECRET_API_KEY)
    response = pinata.pin_file_to_ipfs(path_to_file, "/", False)
    return response


if __name__ == "__main__":
    import sys

    if len(sys.argv) != 2:
        print("Usage: python pinata_api.py <path_to_file>")
        sys.exit(1)
    path_to_file = sys.argv[1]
    response = pin_file_to_ipfs(path_to_file)
    print(response)
For the pinata_api.py script, we referenced the official Pinata documentation available here.
To start, we import the PinataPy library using the statement from pinatapy import PinataPy. To safeguard key variables from public exposure, we store them within an environment file. The recommended practice is to use the dotenv library, detailed here. This library reads from a .env file, allowing retrieval of both PINATA_API_KEY and PINATA_SECRET_API_KEY via os.getenv.
The primary objective of the pin_file_to_ipfs function is to dispatch the model to IPFS via the Pinata gateway. Using Pinata instead of raw IPFS offers several advantages, including a more mature API and better upload speed. Within the pinata_api.py script, we use the if __name__ == "__main__" guard and the sys library to gather parameters from the command line, which I'll elaborate on in the next section. The path_to_file argument holds the path of the file to upload.
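The shell script in the next section only needs the IpfsHash field from the response this function returns. As a small sketch of that extraction step in pure Python (the sample dictionary below is illustrative only; the real Pinata response carries more fields and a real hash):

```python
def extract_ipfs_hash(response):
    """Return the IpfsHash field from a Pinata pin response,
    or an empty string if it is missing."""
    return response.get("IpfsHash", "")

# Illustrative response shape only -- not a real hash.
sample = {"IpfsHash": "QmExampleHash", "PinSize": 1234}
print(extract_ipfs_hash(sample))  # QmExampleHash
```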
upload_image.sh
#!/bin/bash
set -e
OUTPUT_FILE=$(mktemp)
time (tar -czf "${OUTPUT_FILE}.xz" .)

# Use the pinata_api.py script to pin the file to IPFS
echo "Uploading the compressed image to IPFS.."
response=$(python pinata_api.py "${OUTPUT_FILE}.xz")

# Extract the IpfsHash from the response using Python
echo "Extracting IpfsHash.."
ipfs_hash=$(python -c "import json; data = $response; print(data.get('IpfsHash', ''))")
echo "Model definition IPFS hash: $ipfs_hash"

# Clean up the temporary output file
rm "${OUTPUT_FILE}.xz"
This is the bash script, a command-line tool. First:
- Create a file called upload_image.sh within the root directory.

Let’s briefly explain our objective here. We aim to create a script that packs all the files and uploads them to the Pinata server. The reason for tarring (compressing) the entire directory, rather than uploading the model file alone, is to speed up the process and enable model reuse: uploading only the model would limit each upload to a single use, because the Docker image it relies on would not be included.
We use tar -czf to compress all the files into a temporary archive (the path returned by mktemp, with an .xz suffix). Next, we run python pinata_api.py to upload the tar file to IPFS via Pinata, which returns a response containing the upload details. We only need the IpfsHash, which we extract with an inline Python command. Lastly, we remove the temporary archive once the process is complete.
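One caveat with the inline extraction above: `python -c "... data = $response ..."` splices the response text directly into Python source, which breaks if the output ever contains quotes or unexpected characters. A more robust (hypothetical) variant would have pinata_api.py print json.dumps(response) instead of the raw dictionary, so the shell side can parse it safely:

```python
import json

# Hypothetical text printed by a pinata_api.py that uses json.dumps(response);
# the hash and fields below are illustrative only.
response_text = '{"IpfsHash": "QmExampleHash", "PinSize": 1234}'

data = json.loads(response_text)   # safe parsing, no code splicing
print(data.get("IpfsHash", ""))    # QmExampleHash
```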
Now make the script executable (chmod +x upload_image.sh) and run it from the terminal with the following command; all the files will be uploaded to IPFS as a single bundle.
./upload_image.sh
You should see a result like the following:
real 0m1.238s
user 0m1.165s
sys 0m0.037s
Uploading the compressed image to IPFS..
Extracting IpfsHash..
Model definition IPFS hash: <your IPFS hash>
Congratulations!
Congratulations on successfully building your first model! Here’s what to do next:
- Github Repo for the model: here
- Download the client here.
- Create a task using the hash. For guidance, refer to the user manual.
We highly value your feedback, as it helps us enhance our offerings. Your input will also contribute to the development of future badges and quests. Please take a moment to share your thoughts with us here. Your assistance is greatly appreciated in helping us improve!
Reach out to us:
Website: https://flock.io/
Twitter: https://twitter.com/flock_io
Telegram: https://t.me/flock_io_community
Discord: https://discord.gg/ay8MnJCg2W