After the setup is done, you can view the connector setup UI at: .
Kindly complete the prerequisites before proceeding with the guide.
In the terminal window type the following command to run FarmStack setup:
Click on the Add New + button, which will open a dialog to set up a connector.
Select the connector you want to run. Here we'll run the Google Sheets connector.
Click Next, which will take you to the configure tab.
In the configure tab, enter the Google email address you would like to share the data with. You can also change the sheet title here.
Clicking Next will take you to the connect tab.
In the connect tab, you can see the provider connectors available to connect.
Select the Video List Provider (DG - Coco) from the list of providers and click Finish.
On the homepage, the table will update to show the currently running connectors.
From this table you can open the homepage of the connector or delete the connector by clicking the red bin icon.
Open the homepage of the App and click on the Sync Data button to fill the data in the Google Sheet.
Open the homepage of the App and click on the Sync Data button to generate a CSV. Once the CSV file is generated, click the Download option to download the CSV file.
FarmStack is a reference implementation of an open and interoperable data sharing protocol in the agriculture sector.
FarmStack is required because:
Relevant farmer profiles, including farmer activity data, are not available
Lack of trust due to the misuse or under-utilisation of data held in a centralised data warehouse
Need to comply with evolving data policy and privacy safeguarding measures
Existing data integration tools lack the required customisation
Requirements:
Python 3.6+
Docker Desktop (for macOS, and Windows with WSL 2)
Docker and Docker Compose (for Ubuntu)
Or you can follow one of our step-by-step guides to set up your own connectors:
FarmStack enables a network of data providers and consumers through a suite of products and functionalities:
Share data directly, without any third party, through a trusted peer-to-peer (P2P) connector.
Empower the data provider to restrict usage of data through usage policies.
Give control of data back to the farmers by managing consent using a data wallet.
Enable entities to create plugins to make their data discoverable (description of data).
FarmStack is the sum total of all the peer-to-peer connectors and associated usage policies.
We would love your contribution to this project, no matter how big or small.
@TODO - Add Roadmap
This tutorial will guide you through the process of installing and running a Video Library Data Consumer. The video library can be found here: .
After the setup is complete, in the browser window, open the installer frontend by typing .
In the browser, open: and follow the instructions.
For more details see .
You can see FarmStack in action by for fetching data from .
For more information visit or .
To get started see our .
FarmStack is licensed under the Apache License 2.0. See the file for licensing information.
This tutorial will guide you through the process of running an example self-managed connector end-to-end setup. Kindly complete the prerequisites before following this guide.
In the terminal window type the following command to run FarmStack setup:
Navigate to New Connector tab
Give the provider a unique name, for example, Test Provider 1, and click Next.
Give the consumer a unique name, for example, Test Consumer 1, and click Next.
Verify the details and start the connection by clicking the Set up Connection button.
Depending on your system resources and internet speed this step could take anywhere from a few seconds to a few minutes. You can check the progress in the terminal window.
When the connector setup is complete, open the status tab and click the View transferred data link to see your data.
Kindly wait a couple of minutes for the contract negotiation process between the provider and consumer to complete before they can start sharing the data.
After the negotiation, the provider will start streaming data to the consumer. Kindly refresh to see the data shared into the consumer application.
This completes the tutorial for setting up a self-managed connector with the usage control example. If you face any issue while running the self-managed connector, kindly open a new issue in the GitHub repository and our experts will guide you.
After the setup is complete, in the browser window, open the installer frontend by typing .
Kindly follow the previous tutorial to locally set up and deploy FarmStack before proceeding with this next step, if you have not done so already.
Clone the FarmStack repository and open it in a terminal using the following commands:
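The exact commands are not reproduced here; a likely sequence, assuming the repository directory is farmstack-open (as referenced later in this guide) and substituting the actual repository URL:

```bash
# Clone the repository (replace the placeholder with the actual FarmStack repository URL)
git clone <farmstack-open-repository-url>
cd farmstack-open
```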
Open prepareConsumerApp.sh in the scripts folder in your favorite editor.
Edit the following variable according to your application:
You can also edit the parameters for the example configuration according to your requirements, but it is advised to leave these variables untouched, unless you know what you're doing.
In the terminal, execute the script from the farmstack-open directory:
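A minimal sketch of the invocation, assuming the script sits at scripts/prepareConsumerApp.sh as described above:

```bash
# Run from the farmstack-open directory
bash scripts/prepareConsumerApp.sh
```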
This script will create a Docker image for your application and modify the required usage control parameters in the example-provider-routes.xml file.
In the terminal window type the following command to run FarmStack setup:
Follow the steps to create your connectors, giving them unique names such as cities-provider and cities-consumer.
Start the connection by clicking the Setup Connection button.
When the connector setup is done, click the View transferred data link to see your data. Kindly wait a couple of minutes for the contract negotiation process between the provider and consumer to complete before they can start sharing the data.
This completes the tutorial for running the dockerized application with the consumer connector. If you face any issue while running your consumer app, kindly open a new issue in the GitHub repository and our experts will guide you.
Running this command will install Docker automatically on Linux, if it is not available.
Install Docker on your system according to your OS:
For Ubuntu, also install docker-compose:
Install the dependencies on Ubuntu:
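The original commands are not reproduced here; a possible set covering the two Ubuntu steps above, based on the listed requirements (Python 3.6+, Docker, Docker Compose) — the package names are assumptions and may differ from the guide's own commands:

```bash
# Assumed Ubuntu package names; adjust to match the official instructions
sudo apt-get update
sudo apt-get install -y python3 python3-pip docker.io docker-compose
```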
Clone the repository and open it.
Run the setup.py file using python3
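For example, from the repository root (assuming setup.py sits at the top level):

```bash
# Assumes setup.py is at the repository root
python3 setup.py
```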
This tutorial will describe how to run the dockerized application with the connector. It does not contain information on dockerizing the app; kindly dockerize it according to your application first.
You can find the sample-nodejs application used in this tutorial .
After the setup is complete, in the browser window, open the installer frontend by typing .
After the setup is done, you can view the connector setup UI at: .
Thank you for your interest in contributing to FarmStack. We are currently building our contribution guidelines; meanwhile, you can contact us on our and .
Watch this space for super exciting updates. Our bots are already hard at work using GPT-3 to create this page for you.
This tutorial will describe how to set up a FarmStack connector for local CSV files. Kindly follow the steps to install the FarmStack requirements before proceeding with this setup.
You can follow this process for any file; here we will be using a file called cities.csv present in the Downloads directory in the home folder.
Clone the FarmStack GitHub repository on your local machine and open it.
In the FarmStack repository, open the example-provide-routes.yaml file in the fs-config/usage-control-example/ directory.
In the sendData route, replace sample_data1.csv with the filename of your CSV file.
Next, open the docker-compose-provider.yaml file in the fs-config/usage-control-example/ directory.
Here, comment out the lines which mount sample_data1.csv and sample_data2.csv to the Docker container, and add a line to mount cities.csv to the container, as shown here:
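The referenced snippet is not preserved here; a sketch of what the volumes section might look like — the service name (provider) and the container-side paths are assumptions, so keep the paths already used by the sample_data entries in your docker-compose-provider.yaml:

```yaml
# Hypothetical volumes section of the provider service
services:
  provider:
    volumes:
      # - ./sample_data1.csv:/data/sample_data1.csv   # commented out
      # - ./sample_data2.csv:/data/sample_data2.csv   # commented out
      - ~/Downloads/cities.csv:/data/cities.csv        # mount your own CSV instead
```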
There is no limit on the number of CSV files that can be mounted on the connector; follow the same instructions for additional CSV files.
Save the files and in the terminal window type the following command to run FarmStack setup:
Follow the steps to create your connectors, giving them unique names such as cities-provider and cities-consumer.
Start the connection by clicking the Setup Connection button.
When the connector setup is done, click the View transferred data link to see your data. Kindly wait a couple of minutes for the contract negotiation process between the provider and consumer to complete before they can start sharing the data.
Watch this space for super exciting updates. Our bots are already hard at work to create this page for you.
This completes the tutorial for CSV file transfer through the FarmStack Provider Connector. If you face any issue while setting up your own CSV file, kindly open a new issue in the GitHub repository and our experts will guide you.
After the setup is complete, in the browser window, open the installer frontend by typing .
This tutorial will describe how to convert your NodeJS application into a Docker application for compatibility with the FarmStack connector.
Create a new file named Dockerfile in the application folder and open it in your favorite text editor. Copy this code into the Dockerfile:
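The snippet itself is not preserved here; based on the note at the end of this page about using the latest Node alpine image, it is likely just the base-image line:

```dockerfile
# Base image: the latest Node alpine image (see the note at the end of this page)
FROM node:alpine
```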
Next, we will create a directory inside the image to hold all of our application code.
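A likely line for this step; the directory path is an assumption borrowed from common NodeJS Dockerfiles:

```dockerfile
# Create and switch to the app directory inside the image (path assumed)
WORKDIR /usr/src/app
```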
Since we are using a Node image, node and npm are already installed in it. We just need to copy our package.json and package-lock.json files.
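A sketch of this step; the npm install line is implied by the layer-caching note at the end of this page:

```dockerfile
# Copy only the package manifests first, then install dependencies
COPY package.json package-lock.json ./
RUN npm install
```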
Now we will copy your app's source code into the Docker image.
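Likely the single line that followed here:

```dockerfile
# Copy the rest of the application source into the image
COPY . .
```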
Our sample-nodejs app binds to port 8081, so we will expose this port to the Docker daemon using the EXPOSE command. If your app uses any other port, kindly change it.
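For the sample app this would be:

```dockerfile
# Port the sample-nodejs app binds to; change it if your app uses a different port
EXPOSE 8081
```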
In this last step, define the command to start your application. Our sample application starts with the npm start command. You can also use a shell script file here which executes to start your server.
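For the sample application this is likely:

```dockerfile
# Start the application
CMD ["npm", "start"]
```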
This should be your final Dockerfile:
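The final file is not reproduced here; assembling the steps above gives a sketch like the following (the working directory is an assumption, and the port and start command should match your own app):

```dockerfile
FROM node:alpine

# App directory inside the image (path assumed)
WORKDIR /usr/src/app

# Install dependencies first to benefit from layer caching
COPY package.json package-lock.json ./
RUN npm install

# Copy the application source
COPY . .

# Port the sample-nodejs app binds to
EXPOSE 8081

# Start the application
CMD ["npm", "start"]
```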
Create a .dockerignore file in the same directory as your Dockerfile. Add the following lines to the file:
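The exact lines are not preserved here; a common choice for NodeJS apps, offered as an assumption, is:

```
node_modules
npm-debug.log
```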
First, we define the image we are going to use. Here we are using the latest alpine image of Node to keep the size of the NodeJS application small. You can use any image available in the .
Here, we copy the package.json files before copying the complete project. This is done to take advantage of Docker layer caching, so dependencies are only installed if these files have changed. You can find more information about this .
You can find the final Dockerfile .