Tuesday, 7 February 2017

UPLOADING FILES TO AWS S3 BUCKET USING HTML FORM UPLOAD

Amazon Web Services Simple Storage Service (S3) lets anyone with an AWS account store a virtually unlimited amount of data online. We can upload a file directly by logging in to the AWS console with our credentials. Here, however, we are going to build an HTML page on our own system that lets us upload files to an AWS S3 bucket from a web browser, without logging in to the AWS account.
I am going to give a detailed explanation and a step-by-step process for building the HTML page. To run the page, you first need your own AWS account. Create a bucket in S3 to hold the files we upload from this HTML page.

The <meta> tag is used to specify the page description and provides metadata about the HTML document. Metadata is not displayed on the page.

Create two buttons: one to choose a file and another to upload it.



Write a form tag so that the page submits to the S3 bucket when the button is clicked. In the form, write an action attribute specifying the URL to which the file must be uploaded when the "Upload" button is pressed.



From the above image, replace the bucket name in the action attribute with your own AWS S3 bucket name, as shown in the figure; the rest of the URL stays the same.
Here the key represents the name of the file we are going to upload.
"${filename}" dynamically picks up the name of whichever file is chosen for upload to S3.
If the bucket contains folders (directories), we can choose which folder to upload into by changing the value to "your_folder_name/${filename}".




The AWS Access Key ID and Secret Access Key are our credentials for authenticating with AWS. These keys are unique for every user, and AWS does not allow you to retrieve them once they have been created. If you lose your keys, you have to create new ones rather than recover the old ones.




AWS S3 supports some predefined sets of permissions, called canned Access Control Lists (ACLs).
These are the ACLs used most often:

  • private
  • public-read
  • public-read-write

private :-

Only the owner gets full rights; no other user has any access. This applies to both buckets and objects.

public-read :-

The owner gets full rights, and all other users get read-only access. This applies to both buckets and objects.

public-read-write :-

The owner gets full rights, and all other users get both read and write access. This applies to both buckets and objects. It is generally not recommended because of the security risk.



Add the link to redirect to after a file is uploaded successfully, so that a success message can be shown.



For uploading to an S3 bucket, AWS S3 requires a policy document. That policy document has to be Base64-encoded.





To generate the Base64-encoded value we can use a short piece of Python code. Before that, we first need to write the policy document and save it as "policy.txt" on our system.

The details we provide in the policy document must match the details we provide in our HTML page: every attribute key in the policy document has to correspond to a field in our HTML code.
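
As a rough guide, here is a minimal sketch of how such a policy document can be created from Python. The bucket name ("uploadfromhtml"), ACL, redirect URL and expiration date below are only example values; replace each of them with the values you actually use in your HTML form.

import json

# A minimal sample policy document. Every value here is a placeholder
# and must match the corresponding field in your HTML form.
sample_policy = {
    "expiration": "2018-01-01T00:00:00.000Z",
    "conditions": [
        {"bucket": "uploadfromhtml"},
        ["starts-with", "$key", ""],
        {"acl": "public-read"},
        {"success_action_redirect": "http://your_redirect_url_here"},
        ["starts-with", "$Content-Type", ""],
    ],
}

# Save it as policy.txt so the Base64-encoding code below can read it.
with open("policy.txt", "w") as policy_file:
    json.dump(sample_policy, policy_file, indent=2)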



This is the Python code to encode the policy document in Base64:


import base64

# Read the policy document and Base64-encode it; AWS expects this
# encoded value in the "policy" field of the upload form.
with open("policy.txt", "rb") as policy_file:
    policy = policy_file.read()

policy_encoded = base64.b64encode(policy)
print("%s" % policy_encoded)


The output will look like b'*********************************************'.
You need to copy just the text between the single quotes.
The copied output of the Python code is the input for our policy attribute; paste the encoded value there.
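
If you would rather not pick the text out of the b'...' wrapper by hand, you can optionally change the last line of the script to decode the Base64 bytes into a plain string before printing:

# Optional: decode the Base64 bytes to a plain UTF-8 string so the
# printed value can be pasted straight into the policy attribute.
print(policy_encoded.decode("utf-8"))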

As above, we need to run a small piece of code to calculate the signature value.

Below is the Python code for calculating the signature value.

import base64
import hmac
from hashlib import sha1

UTF8 = 'utf-8'

# Your AWS secret access key; never share or publish this value.
my_secret_key = '*****Copy_your_secret_access_key_here*****'

def create_signature(secret_key, encoded_policy):
    # Sign the Base64-encoded policy with HMAC-SHA1 using the secret key,
    # then Base64-encode the resulting digest.
    new_hmac = hmac.new(bytes(secret_key, UTF8), digestmod=sha1)
    new_hmac.update(bytes(encoded_policy, UTF8))
    signature_base64 = base64.b64encode(new_hmac.digest())
    signature = str(signature_base64, UTF8).strip()
    return signature

if __name__ == '__main__':
    my_encoded_policy = "**Copy_base64_encoded_policy_document_output_of_previous_code**"
    signature = create_signature(my_secret_key, my_encoded_policy)
    print(signature)
Paste the output of the above Python program into the signature attribute value, as shown below.

The content type simply states what type of file is going to be uploaded. With this attribute we can restrict users to uploading a particular file type: if the content type is image/jpeg, for example, then only .jpeg image files can be uploaded. If we don't know in advance what type of file the user will upload, we use "application/octet-stream", which allows any file type.
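
If you are not sure which MIME string corresponds to a particular file type, Python's standard mimetypes module can look it up; this is just an optional helper and not part of the upload form itself:

import mimetypes

# Guess the MIME type from a file name; fall back to the generic
# binary type when the extension is not recognised.
content_type, _ = mimetypes.guess_type("IMP TASK.txt")
print(content_type or "application/octet-stream")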

"uploadfromhtml" is my bucket name; I have just checked for files in the bucket and it is empty at the moment.








Run your HTML code in your browser.
Choose a file from your system; here I am selecting a text file called IMP TASK.

Press the Upload button to upload the file into the S3 bucket.








Here is the template HTML form page
<html>
<head>
<title>S3 POST Form</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body>
<form action=" " method="post" enctype="multipart/form-data">
<input type="hidden" name="key" value=" ">
<input type="hidden" name="AWSAccessKeyId" value=" ">
<input type="hidden" name="acl" value=" ">
<input type="hidden" name="success_action_redirect" value=" ">
<input type="hidden" name="policy" value=" ">
<input type="hidden" name="signature" value=" ">
<input type="hidden" name="Content-Type" value=" ">
File to upload to S3:
<input name="file" type="file">
<br>
<input type="submit" value="Upload File to S3">
</form>
</body>
</html>
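
If you want to test the same policy and signature outside the browser, one option is to send the identical POST fields from Python using the third-party requests library. This is only a sketch: the bucket URL, file name and every field value below are placeholders and must match your own form and policy document.

import requests  # third-party library: pip install requests

# The same fields the HTML form submits; each value must be allowed
# by the policy document that was signed.
fields = {
    "key": "${filename}",
    "AWSAccessKeyId": "your_access_key_id_here",
    "acl": "public-read",
    "success_action_redirect": "http://your_redirect_url_here",
    "policy": "your_base64_encoded_policy_here",
    "signature": "your_signature_here",
    "Content-Type": "application/octet-stream",
}

with open("IMP TASK.txt", "rb") as upload_file:
    # S3 expects the file to be the last part of the multipart body;
    # requests sends the data fields before the file part.
    response = requests.post(
        "https://uploadfromhtml.s3.amazonaws.com/",
        data=fields,
        files={"file": upload_file},
        allow_redirects=False,
    )

# On success, S3 answers with a 303 redirect to the success_action_redirect URL.
print(response.status_code, response.headers.get("Location"))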

Refresh the S3 bucket page and check for the file we just uploaded from our HTML code.






Conclusion :-
With the help of this HTML template, we connected to an S3 bucket and uploaded a file from our own system to the bucket, without manually using the AWS S3 console.

Thank You,
Bhanu Teja Kotaiahgari,
Developer Trainee Technical,
MOURI Tech Pvt Ltd.
bhanuteja1227@gmail.com