Credit Card Fraud Detection: Build Your Own Model — Part 2
4 min read · Feb 9, 2024

Don’t forget to fill in the form once you have completed the tutorial

This is the second part of the tutorial, where we will focus on deployment and execution.

In the previous tutorial, we built a credit card fraud detection model with the Flock SDK, making it suitable for federated learning.

For part two, we will create the following files inside the credit_card_fraud_detection directory:

  • .env - for the Pinata keys
  • Dockerfile - defines the container image that hosts the run
  • a Python upload script - to communicate with Pinata
  • a bash script - automatically wraps up your source code, with all necessary environments, into a Docker image

Now, let’s dive in.

Step-by-Step Guide


The .env file is crucial for defining variables that should not be exposed to the public. In our case:


These two variables should be kept secure. To obtain them:

  • Create a .env file within the root directory.
  • Register a Pinata account here.
  • After logging in, click the “API Key” tab on the left-hand side, then click “New Key”.
  • You will need two pieces of information: the API Key and the API Secret, which correspond to the two variables in the .env file above.
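Putting it together, the .env file should hold your two Pinata credentials (the values below are placeholders; the variable names match those read by the upload script):

```
PINATA_API_KEY=<your_api_key>
PINATA_SECRET_API_KEY=<your_api_secret>
```

Never commit this file to version control; add it to .gitignore.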


Let me briefly describe what Docker does and why we need it. Docker is a platform that assists developers in packaging, distributing, and managing applications using containers. Containers ensure consistent application performance from development to production, enhancing deployment workflows, resource utilization, and scalability.

Here is the Dockerfile definition. Before proceeding:

  • Create a file named Dockerfile within the root directory.
  • Paste the code.
FROM rodrigoflock/flock_base
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000/tcp
CMD [ "python3", "" ]

What we’ve done here: we start from the existing Docker image rodrigoflock/flock_base to set up our own environment. Then we set the working directory, copy requirements.txt from local into the image, and install all the Python packages. After that, we copy the rest of the files. The line EXPOSE 5000/tcp documents that the container listens on port 5000, exposing our portal on that port. Lastly, CMD runs our Python script with python3.
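As a quick local sanity check (the image tag fraud-detection below is illustrative, not from the tutorial), you can build and run the image with the standard Docker CLI:

```
# Build the image from the Dockerfile in the current directory
docker build -t fraud-detection .

# Run the container, mapping the exposed port 5000 to the host
docker run -p 5000:5000 fraud-detection
```

This is optional here; the bash script below handles packaging for you.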

import os
import sys

from dotenv import load_dotenv
from pinatapy import PinataPy

load_dotenv()
PINATA_API_KEY = os.getenv("PINATA_API_KEY")
PINATA_SECRET_API_KEY = os.getenv("PINATA_SECRET_API_KEY")

pinata = PinataPy(PINATA_API_KEY, PINATA_SECRET_API_KEY)

def pin_file_to_ipfs(path_to_file):
    response = pinata.pin_file_to_ipfs(path_to_file, "/", False)
    return response

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python <path_to_file>")
        sys.exit(1)
    path_to_file = sys.argv[1]
    response = pin_file_to_ipfs(path_to_file)
    print(response)

For the script, we referenced the official Pinata documentation available here.

To start, we import the PinataPy library using the statement from pinatapy import PinataPy. To safeguard key variables from public exposure, we store them within an environment file. The recommended practice involves using the dotenv library, detailed here. This library facilitates reading from a .env file, allowing retrieval of both PINATA_API_KEY and PINATA_SECRET_API_KEY using os.getenv.

The primary objective of the pin_file_to_ipfs function is to send the model to IPFS via the Pinata gateway. Using Pinata instead of raw IPFS offers several advantages, including a more developed API and better speed. Within the script, the if __name__ == "__main__" guard and the sys library gather parameters from the command line, which I'll elaborate on in the next section. The path_to_file argument holds the path of the file to upload.

set -e

OUTPUT_FILE=$(mktemp)
time (tar -czf "${OUTPUT_FILE}.xz" .)

# Use the script to pin the file to IPFS
echo "Uploading the compressed image to IPFS.."
response=$(python "${OUTPUT_FILE}.xz")
# Extract the IpfsHash from the response using Python
echo "Extracting IpfsHash.."
ipfs_hash=$(python -c "import json; data = $response; print(data.get('IpfsHash', ''))")
echo "Model definition IPFS hash: $ipfs_hash"
# Clean up the temporary output file
rm "${OUTPUT_FILE}.xz"

This is the bash script, a command-line tool. First:

  • Create the script file within the root directory.

Let’s briefly explain the objective here. We want a script that packs all the files and uploads them to the Pinata server. We tar (compress) the entire directory to speed up the process and enable model reuse: if we uploaded only the model to Pinata, each model could be used just once, because of how the Docker image is consumed.

We use tar -czf to compress all files into a temporary .xz file created with mktemp. Next, we run the Python upload script to push the tar file to IPFS via Pinata, which returns a JSON response containing the upload details. We only need the IpfsHash, which we extract with an inline Python command. Lastly, we remove the temporary file once the process is complete.
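The inline `data = $response` interpolation works only if the captured response is a valid Python literal. A slightly more robust sketch (assuming the response is JSON, as the Pinata API returns; the hash below is an illustrative placeholder, not a real CID) parses it with json.loads instead:

```python
import json

# Example Pinata-style response captured from the upload script
response = '{"IpfsHash": "QmExampleHash123", "PinSize": 1024, "Timestamp": "2024-02-09T00:00:00Z"}'

data = json.loads(response)           # parse the JSON string safely
ipfs_hash = data.get("IpfsHash", "")  # extract the hash, defaulting to ""
print(ipfs_hash)
```

json.loads raises a clear error on malformed input, which is easier to debug than an arbitrary-code interpolation failure.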

Now you can run the script from the terminal; all files will be uploaded to IPFS as a bundle.


You should see a result like the following:

real 0m1.238s
user 0m1.165s
sys 0m0.037s
Uploading the compressed image to IPFS..
Extracting IpfsHash..
Model definition IPFS hash: <your IPFS hash>


Congratulations on successfully building your first model! Here’s what to do next:

  • GitHub repo for the model: here
  • Download the client here.
  • Create a task using the hash. For guidance, refer to the user manual.

We highly value your feedback, as it helps us enhance our offerings. Your input will also contribute to the development of future badges and quests. Please take a moment to share your thoughts with us here. Your assistance is greatly appreciated in helping us improve!
