How to integrate PayUMoney payment gateway in Django

In this article we will see how to integrate the PayUMoney payment gateway in your Django app.



Why PayUMoney:

- Easy Integration.

- Fixed charges per transaction.

- No account setup fee.

- Great customer care support.



Steps:

- Register on PayUMoney.com as a seller/merchant. Fill in your details in the form and submit.

- Select the product Payment Gateway.

- On the next screen, select your business filing status, business name and kind of business. The name of the bank account holder should be the same as the business name.

- Similarly complete the next few steps and get your key and salt.

- You will need to provide your PAN card details, bank account details and address for the account to be activated.

- Once the above details are provided, you will receive a confirmation call from PayUMoney and will be required to send them the documents. After that your account will be fully active. While the account is waiting for document verification, we can already proceed with the coding part.



Code:

- Create an HTML page for the payment and display all the information on it. For simplicity I am not displaying editable fields here; the amount, the payer's email id and the other details are already fetched from the system. You may add editable fields.

<form action="{{ action }}"  name="payuForm" method="post">    
    {% csrf_token %}        
    <input type="hidden" name="key" value="{{ key }}" />            
    <input type="hidden" name="hash" value="{{ hash }}"/>            
    <input type="hidden" name="txnid" value="{{ txnid }}" />
    <input type="hidden" name="amount" value="{{ amount }}" />
    <input type="hidden" name="email" value="{{ email }}" />
    <input type="hidden" name="firstname" value="{{ firstname }}" />
    <input type="hidden" name="phone" value="{{ phone }}" />
    <input type="hidden" name="productinfo" value="{{ productinfo }}"/>
    <input type="hidden" name="surl" value="{{ surl }}"/>
    <input type="hidden" name="furl" value="{{ furl }}" />
    <input type="hidden" name="service_provider" value="{{ service_provider }}" />

    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Amount : {{amount}}
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Purpose : {{productinfo}}
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Name : {{name}}
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Email : {{email}}
        </div>
    </div>            
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Mobile : {{phone}}
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            Transaction ID : {{txnid}}
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-12 col-sm-12" style="padding-bottom:20px;padding-top:20px;">
            After clicking 'Pay Now' button, you will be redirected to PayUMoney Secure Gateway.
        </div>
    </div>
    
    <div class="form-group">
        <div class="col-md-12 col-sm-12">
            <input type="submit" class="btn btn-success btn-sm" value="Pay Now">
        </div>
    </div>
</form>


- Please pay attention to the fields. The mandatory fields are key, txnid, hash, amount, email, firstname, phone, productinfo, surl (success URL), furl (failure URL) and service_provider.



- Now write your view. I have added comments above the relevant lines in the views.py code below.

from django.shortcuts import render
# for Django >= 2.0 use: from django.urls import reverse
from django.core.urlresolvers import reverse
from django.views.decorators.csrf import csrf_exempt
import logging, traceback
import hashlib
from random import randint
import students.constants as constants
import students.config as config

def payment(request):   
    data = {}
    txnid = get_transaction_id()
    hash_ = generate_hash(request, txnid)
    hash_string = get_hash_string(request, txnid)
    # use constants file to store constant values.
    # use test URL for testing
    data["action"] = constants.PAYMENT_URL_LIVE 
    data["amount"] = float(constants.PAID_FEE_AMOUNT)
    data["productinfo"]  = constants.PAID_FEE_PRODUCT_INFO
    data["key"] = config.KEY
    data["txnid"] = txnid
    data["hash"] = hash_
    data["hash_string"] = hash_string
    data["firstname"] = request.session["student_user"]["name"]
    data["email"] = request.session["student_user"]["email"]
    data["phone"] = request.session["student_user"]["mobile"]
    data["service_provider"] = constants.SERVICE_PROVIDER
    data["furl"] = request.build_absolute_uri(reverse("students:payment_failure"))
    data["surl"] = request.build_absolute_uri(reverse("students:payment_success"))
    
    return render(request, "students/payment/payment_form.html", data)        
    
# generate the hash
def generate_hash(request, txnid):
    try:
        # get keys and SALT from dashboard once account is created.
        # hashSequence = "key|txnid|amount|productinfo|firstname|email|udf1|udf2|udf3|udf4|udf5|udf6|udf7|udf8|udf9|udf10"
        hash_string = get_hash_string(request,txnid)
        generated_hash = hashlib.sha512(hash_string.encode('utf-8')).hexdigest().lower()
        return generated_hash
    except Exception as e:
        # log the error here.
        logging.getLogger("error_logger").error(traceback.format_exc())
        return None

# create hash string using all the fields
def get_hash_string(request, txnid):
    hash_string = config.KEY+"|"+txnid+"|"+str(float(constants.PAID_FEE_AMOUNT))+"|"+constants.PAID_FEE_PRODUCT_INFO+"|"
    hash_string += request.session["student_user"]["name"]+"|"+request.session["student_user"]["email"]+"|"
    hash_string += "||||||||||"+config.SALT

    return hash_string

# generate a random transaction Id.
def get_transaction_id():
    hash_object = hashlib.sha256(str(randint(0,9999)).encode("utf-8"))
    # take the appropriate length
    txnid = hash_object.hexdigest().lower()[0:32]
    return txnid

# No CSRF token is required to reach the success page, as PayUMoney posts to it directly.
# This page displays the success/confirmation message indicating the completion of the transaction.
@csrf_exempt
def payment_success(request):
    data = {}
    return render(request, "students/payment/success.html", data)

# No CSRF token is required for the failure page. This page displays the failure message and its reason.
@csrf_exempt
def payment_failure(request):
    data = {}
    return render(request, "students/payment/failure.html", data)
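Note that get_transaction_id above hashes a random integer between 0 and 9999, so there are only 10,000 possible transaction ids and collisions are bound to occur under real traffic. A safer sketch (my suggestion, not part of the original article) derives the id from uuid4:

```python
import uuid


def get_transaction_id():
    # uuid4 carries 122 random bits, so collisions are practically impossible;
    # .hex is already a 32-character lowercase hex string, matching the
    # length taken from the sha256 digest above
    return uuid.uuid4().hex
```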


- Constants file content.

PAID_FEE_AMOUNT = 1
PAID_FEE_PRODUCT_INFO = "Message showing product details."
PAYMENT_URL_TEST = 'https://test.payu.in/_payment'
PAYMENT_URL_LIVE = 'https://secure.payu.in/_payment'
SERVICE_PROVIDER = "payu_paisa"
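Since the view should point at PAYMENT_URL_TEST while testing and PAYMENT_URL_LIVE in production, one way to avoid editing the view each time is a small helper that picks the URL from a flag (the debug argument here is an assumed setting, not part of the original code):

```python
# constants.py (sketch): pick the gateway URL in one place
PAYMENT_URL_TEST = 'https://test.payu.in/_payment'
PAYMENT_URL_LIVE = 'https://secure.payu.in/_payment'


def payment_url(debug):
    # sandbox while debugging, live gateway otherwise
    return PAYMENT_URL_TEST if debug else PAYMENT_URL_LIVE
```

The view would then set data["action"] = constants.payment_url(settings.DEBUG) instead of hardcoding one of the two URLs.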


- Config file content.

SALT = 'YOUR SALT'
KEY = 'YOUR KEY'


- Then we need to add the URLs to the urls.py file.

from django.conf.urls import url  # in Django >= 2.0, use path()/re_path() from django.urls
from students import views

app_name = "students"
urlpatterns = [
    url(r'^payment/$', views.payment, name="payment"),
    url(r'^payment/success$', views.payment_success, name="payment_success"),
    url(r'^payment/failure$', views.payment_failure, name="payment_failure"),
]


- Add appropriate content in success.html and failure.html templates.

- The hash string is generated as:

key|txnid|amount|productinfo|firstname|email|udf1|udf2|udf3|udf4|udf5|udf6|udf7|udf8|udf9|udf10
Here udf1 to udf10 are user-defined fields which you may want to send as POST data from the payment form.
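As a quick sanity check, the forward hash can be computed by hand for sample values. The key, salt and field values below are placeholders, and all ten udfs are left empty, exactly as in get_hash_string above:

```python
import hashlib

KEY, SALT = "yourkey", "yoursalt"  # placeholders, use your merchant values
# txnid, amount, productinfo, firstname, email (sample values)
fields = ["abc123txnid", "1.0", "Fee payment", "John", "john@example.com"]

# sequence: key|txnid|amount|productinfo|firstname|email|udf1..udf10|SALT
hash_string = "|".join([KEY] + fields + [""] * 10 + [SALT])
payu_hash = hashlib.sha512(hash_string.encode("utf-8")).hexdigest().lower()
```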

- Once the 'Pay Now' button is clicked, the page is redirected to the PayUMoney site, and the success or failure page is displayed at the end based on the transaction status.



Points to remember:

- The most frequently faced error is 'checksum failed'. Make sure you have included all the fields in your form and in the hash string.

- Amount should be a float, not an integer or string, in the form.

- Use absolute URL values for surl and furl.

- It is advisable to use constant and config files.

- If you are using the test URL and receiving errors, make sure your test account is activated and that you are using the SALT and KEY of the test account. Contact customer care to activate a test account.
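One more point worth remembering: the POST that PayUMoney sends back to surl/furl carries its own hash, computed over the reversed field sequence (SALT|status|udf10..udf1|email|firstname|productinfo|amount|txnid|key per PayU's documentation; do verify against the current docs). A hedged sketch of verifying it in the success view, again assuming empty udfs:

```python
import hashlib

SALT, KEY = "yoursalt", "yourkey"  # placeholders, use your merchant values


def verify_response_hash(posted):
    # recompute the reverse hash and compare it with the 'hash' field
    # PayUMoney posts back; udf1..udf10 are assumed to be empty
    seq = [SALT, posted["status"]] + [""] * 10 + [
        posted["email"], posted["firstname"], posted["productinfo"],
        posted["amount"], posted["txnid"], KEY,
    ]
    expected = hashlib.sha512("|".join(seq).encode("utf-8")).hexdigest().lower()
    return expected == posted.get("hash", "").lower()
```

Rejecting responses whose hash does not match guards against tampered amount or status fields.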





After your web app is complete, you might want to host it. PythonAnyWhere offers free hosting that works well for Django apps, and that is the setup the next section builds on.


Automatically updating Django website hosted on PythonAnyWhere server with every git push

Until now, this is how I used to develop and deploy (update) code on the PythonAnyWhere server:

1. Make changes in code on my local machine.
2. Commit and push the code to the remote repository.
3. Log in to the PythonAnyWhere server and start a bash terminal.
4. Pull the code from the remote repository.
5. Reload the web app from the web tab.


Steps 3 to 5 are time consuming, repetitive and boring, so I thought of eliminating them. In this article we will see how you can get rid of these steps so that your website is automatically updated as soon as you push code to the remote repository.



Steps:

Get Bitbucket SSH keys:
We are using Bitbucket for source code version control. Every time you pull or push code to the remote repository over HTTPS, it prompts for a password.

Manually entering the password is the first thing we need to remove to automate the process.

Log in to your PythonAnyWhere account, open a bash terminal and go to your home directory.


To set up SSH keys for your Bitbucket account, follow the steps in this article. Feel free to comment if you face any issue here. Once the keys are generated, add the public key to your Bitbucket account settings.


Changing the remote URL:
Now, to pull/push code to the remote repository using these keys, change the git remote URL. Check your current remote URL by running the command git remote -v.

If the URLs start with http or https, you need to add another remote with the SSH URL. The SSH URL of your repository looks something like git@bitbucket.org:username/repositoryname.git. Add this URL as a remote with a different name: git remote add upstream git@bitbucket.org:username/repositoryname.git.

Now whenever you pull from or push to this remote, you will not be asked for a password.


Post-push git hook on the local system:
Git does not support a post-push hook, so we will create our own git alias which does the job for us.

After the code is committed and pushed to the remote repository, we want steps 4 and 5 to run automatically. For this we create a git alias.

Open the .git/config file and paste the below code in it.

[alias]
        # wrap the push in a shell function so the remote and branch land in $1 and $2
        xpush = "!f() { git push $1 $2 && /home/rana/project-dir/reload.sh; }; f"


Now instead of running git push origin branch-name, you will run the git xpush origin branch-name command.

Every time you push your code, the reload.sh file is executed. In reload.sh we write the code which pulls the latest changes on the PythonAnyWhere server and reloads the web app. Write the below code in the reload.sh file, save it, and make the file executable.

#!/bin/bash
sshpass -p paw-password-here ssh user@ssh.pythonanywhere.com '/home/user/project-dir/remote.sh'


For this to work you need to install sshpass on your local system, or you may set up passwordless SSH login instead, which avoids keeping the password in the script.

So the above line in reload.sh makes an SSH connection to the PythonAnyWhere server and executes a shell script located on the remote server.


Shell Script on PAW server:

#!/bin/bash
echo "Starting git pull. Author - Anurag Rana"
cd /home/user/mysite
git pull upstream branch-name
touch /var/www/www_mysite_com_wsgi.py


Create a file at location /home/user/project-dir/remote.sh. Make this file executable.

Remember that in the first step above we added one more git remote named upstream. In the 4th line of the above script we pull code from that remote without being asked for a password (because the remote was created with the SSH URL and we are using SSH keys).

Reloading the web app is done in the 5th line of the script. This works because the server process that handles your web application watches that WSGI file and restarts itself whenever the file is modified.
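If you ever rewrite remote.sh in Python, the same touch-to-reload trick is a one-liner with pathlib (the WSGI path below is the example one from the script above):

```python
from pathlib import Path


def reload_webapp(wsgi_file="/var/www/www_mysite_com_wsgi.py"):
    # updating the file's modification time makes the server process
    # restart the web app, same as the shell 'touch' command
    Path(wsgi_file).touch()
```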



Testing:
Now that all the above tasks are completed, let's test the setup.

  • Change a file in your code on the local machine.
  • git add and git commit the file.
  • git push. Remember to use the xpush command this time. You will see that after the push is done, the code is pulled on the PythonAnyWhere server and the output is printed in the terminal on your local machine.
  • Reload the website and see the changes instantly.


Below is the output when I edited a file, added and committed it, and pushed it to the remote repository.

rana@Brahma: mysite$ git xpush origin mysite 
Password for 'https://anurag8989@bitbucket.org': 
Counting objects: 8, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 593 bytes | 0 bytes/s, done.
Total 8 (delta 6), reused 0 (delta 0)
To https://anurag8989@bitbucket.org/anurag8989/mysite.git
   53e8a3a..aeaa9fe  mysite -> mysite
<<<<<<:>~ PythonAnywhere SSH. Help @ https://help.pythonanywhere.com/pages/SSHAccess
Starting git pull. Author - Anurag Rana
From bitbucket.org:anurag8989/mysite
 * branch            mysite -> FETCH_HEAD
   53e8a3a..aeaa9fe  mysite -> upstream/mysite
Updating 53e8a3a..aeaa9fe
Fast-forward
 mysite/app1/templates/app1/header.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 

If something is not working, check if your shell script files are executable and are placed at right path.

If you are stuck at any step, feel free to comment.  




Python Script 7: Scraping tweets using BeautifulSoup

Twitter is one of the most popular social networking services, used by many of the world's most prominent people. Tweets can be used to perform sentiment analysis.

In this article we will see how to scrape tweets using BeautifulSoup. We are not using the Twitter API because most of its endpoints are rate limited.



Setup:

Create a virtual environment. If you are not in the habit of working with virtual environments, please stop immediately and read this article on virtual environments first.

Once virtual environment is created and activated, install the dependencies in it.

pip install beautifulsoup4==4.6.0 requests==2.18.4 lxml


Analysing Twitter Web Requests:

Let's say we want to scrape all the tweets made by the Honourable Prime Minister of India, Shri Narendra Modi.

Go to the browser (I am using Chrome) and press F12 to open the developer tools.

Now go to the URL https://twitter.com/narendramodi. In the Network tab of the developer tools, you will see the response of the request made to the URL /narendramodi.

The response is an HTML page. We will convert this HTML response into a BeautifulSoup object and extract the tweets.


If you scroll down the page to load more tweets, you will see more requests being sent, where the response is not plain HTML but JSON.



Extracting tweets from HTML content:

First inspect a tweet element on the web page. You will see that each tweet is enclosed in an li HTML tag, and the actual tweet text is inside a p tag which is a descendant of that li tag.

We will first get all the li tags and then the p tag from each li tag. The text contained in the p tag is what we need.


Code to start with:

# script to scrape tweets by a twitter user.
# Author - ThePythonDjango.Com
# dependencies - BeautifulSoup, requests

from bs4 import BeautifulSoup
import requests
import sys
import json


def usage():
    msg = """
    Please use the below command to use the script.
    python script_name.py twitter_username
    """
    print(msg)
    sys.exit(1)


def get_username():
    # if username is not passed
    if len(sys.argv) < 2:
        usage()
    username = sys.argv[1].strip().lower()
    if not username:
        usage()

    return username


def start(username=None):
    # fall back to the command-line argument if no username was passed in
    if username is None:
        username = get_username()
    url = "https://twitter.com/" + username
    print("\n\nDownloading tweets for " + username)
    response = None
    try:
        response = requests.get(url)
    except Exception as e:
        print(repr(e))
        sys.exit(1)
    
    if response.status_code != 200:
        print("Non success status code returned "+str(response.status_code))
        sys.exit(1)

    soup = BeautifulSoup(response.text, 'lxml')

    if soup.find("div", {"class": "errorpage-topbar"}):
        print("\n\n Error: Invalid username.")
        sys.exit(1)

    tweets = get_tweets_data(username, soup)


We will start with the start function. First we collect the username from the command line and then send a request to the Twitter page.

If there is no exception and the status code returned in the response is 200, i.e. success, we proceed; otherwise we exit.

Convert the response text into a BeautifulSoup object and check whether the HTML contains a div tag with the class errorpage-topbar. If it does, the username is invalid. Strictly speaking this check is redundant, because for an invalid username Twitter returns a 404 status, which the status_code condition above already catches.


Extract tweet text:

def get_this_page_tweets(soup):
    tweets_list = list()
    tweets = soup.find_all("li", {"data-item-type": "tweet"})
    for tweet in tweets:
        tweet_data = None
        try:
            tweet_data = get_tweet_text(tweet)
        except Exception as e:
            # ignore the tweet if there is any loading or parsing error
            continue

        if tweet_data:
            tweets_list.append(tweet_data)
            print(".", end="")
            sys.stdout.flush()

    return tweets_list


def get_tweets_data(username, soup):
    tweets_list = list()
    tweets_list.extend(get_this_page_tweets(soup))


As discussed, we first find all the li tags and then, for each of them, try to get the tweet text out of that li tag.

We print a dot on the screen every time a tweet is scraped successfully to show progress; otherwise the user may think the script is doing nothing or has hung.

def get_tweet_text(tweet):
    tweet_text_box = tweet.find("p", {"class": "TweetTextSize TweetTextSize--normal js-tweet-text tweet-text"})
    images_in_tweet_tag = tweet_text_box.find_all("a", {"class": "twitter-timeline-link u-hidden"})
    tweet_text = tweet_text_box.text
    for image_in_tweet_tag in images_in_tweet_tag:
        tweet_text = tweet_text.replace(image_in_tweet_tag.text, '')

    return tweet_text


Tweets sometimes contain images, which we discard for now. We do this by finding the image link tags inside each tweet and replacing their text with an empty string.


Scraping more tweets:

So far we were able to get tweets from the first page only. As we load more pages by scrolling down, we get JSON responses, which have to be parsed slightly differently.

def get_tweets_data(username, soup):
    tweets_list = list()
    tweets_list.extend(get_this_page_tweets(soup))

    next_pointer = soup.find("div", {"class": "stream-container"})["data-min-position"]

    while True:
        next_url = "https://twitter.com/i/profiles/show/" + username + \
                   "/timeline/tweets?include_available_features=1&" \
                   "include_entities=1&max_position=" + next_pointer + "&reset_error_state=false"

        next_response = None
        try:
            next_response = requests.get(next_url)
        except Exception as e:
            # in case there is some issue with request. None encountered so far.
            print(e)
            return tweets_list

        tweets_data = next_response.text
        tweets_obj = json.loads(tweets_data)
        if not tweets_obj["has_more_items"] and not tweets_obj["min_position"]:
            # two checks because in one case has_more_items was false
            # even though there were more items
            print("\nNo more tweets returned")
            break
        next_pointer = tweets_obj["min_position"]
        html = tweets_obj["items_html"]
        soup = BeautifulSoup(html, 'lxml')
        tweets_list.extend(get_this_page_tweets(soup))

    return tweets_list


First we check whether there are more tweets. If there are, we read the next pointer and build the next URL. Once the JSON is received, we take out its items_html part and repeat the process of creating a soup object and fetching tweets. We keep doing this until there are no more tweets to scrape, which we detect from the has_more_items and min_position fields of the JSON response.



Complete script:

Now that all the functions are complete, let's put them together.

Download the complete script from GitHub.



Running the script:

Assuming you have installed the dependencies in the virtual environment, let's run the script.

(scrappingvenv) rana@Nitro:python_scripts$ python tweets_scrapper.py narendramodi


Downloading tweets for narendramodi
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
No more tweets returned

Dumping data in file narendramodi_twitter.json
844 tweets dumped.
(scrappingvenv) rana@Nitro:python_scripts$ 
 

You might introduce some wait between requests if you run into rate-limit errors.
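A minimal way to add that wait without sprinkling time.sleep calls around is a small throttle wrapper (the one-second default is an arbitrary choice, not something the script requires):

```python
import time


def throttled(func, delay=1.0):
    # wrap any fetch function so that consecutive calls are
    # at least `delay` seconds apart
    last_call = [0.0]

    def wrapper(*args, **kwargs):
        wait = delay - (time.monotonic() - last_call[0])
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return func(*args, **kwargs)

    return wrapper
```

In get_tweets_data you would create get_page = throttled(requests.get) once before the loop and then call next_response = get_page(next_url) inside it.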



Dumping data in file:

You might want to dump the data in text file. I prefer dumping data in JSON format.

# dump final result in a json file
def dump_data(username, tweets):
    filename = username+"_twitter.json"
    print("\nDumping data in file " + filename)
    data = dict()
    data["tweets"] = tweets
    with open(filename, 'w') as fh:
        fh.write(json.dumps(data))

    return filename
 

Let us know if you face any issues.


