Monday, July 21, 2014

Is Amazon PaaS-ifying the IaaS?

The term "burst compute" could almost be a synonym for "explosion"; I would rather relate that explosion to the idea / innovation behind the t2 family of EC2 instances than to the instance capacity itself. As of today, the AWS t2 family of instances handles burst capacity, not explosions.

The idea behind the t2 instance type is really cool and meets a real need for several small use cases, like
  • a corporate website for a small enterprise running Drupal, where traffic is generally minimal
  • personal blogs, which get an occasional spike in hits when a new post is published and shared on the social networks
  • daily / scheduled data-load jobs

The way I feel this instance type helps you out is like "increasing your credit card's limit a little bit when you are maxed out". It completely makes sense to have a little additional breathing space on your credit limit, and time to think about the funds to pay the credit card company, rather than being cut off to an abrupt stop without any options.

All along, EC2 instances were more like T-shirt sizes, ranging from micro, small and medium to large and extra-large; suddenly the t2 instance type was a unique creature altogether in the ecosystem of EC2 instances. Accumulating credits during off-time and redeeming them when there is a need is really worth every single compute cycle.
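As a back-of-the-envelope illustration of the credit arithmetic - a minimal sketch, assuming the launch-era published figures for t2.micro (6 CPU credits earned per hour, a 144-credit balance cap, and 1 credit buying 1 minute of full-core usage):

    # Launch-era t2.micro figures (assumed here for illustration only):
    # 6 credits earned per hour, 144-credit cap, 1 credit = 1 minute
    # of full-core CPU.
    EARN_PER_HOUR = 6.0
    CREDIT_CAP = 144.0

    def burst_left(hours_idle, burst_minutes):
        """Bank credits while idle, then spend them bursting at 100% CPU."""
        balance = min(hours_idle * EARN_PER_HOUR, CREDIT_CAP)
        return balance - min(burst_minutes, balance)

    # A blog idling overnight (8 h) banks 48 credits -- enough to absorb
    # a 48-minute full-CPU spike when a new post hits the social networks.
    print(burst_left(hours_idle=8, burst_minutes=48))   # -> 0.0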

t2 family instances give a new dimension to the term "elastic computing & scalability" 


  • Since EC2's launch we have been relating the term scalability to the instance count, via auto scaling or manual scaling
  • Changing the instance size from Large to X-Large etc.
  • Adding additional EBS volumes to the instances
There are several situations where you just need a slight push / bonus to get the job done, and increasing the instance count or upgrading the instance size would be overkill; at such times t2 instances are just perfect.

Instance sizing measured against socks, not T-shirts

We generally specify whether we need a Large or an X-Large, whereas with socks there are possibly only two sizes, viz. children's and adults'. Socks are a good candidate for illustrating elasticity, since they stretch to fit rather than forcing the point of one size fits all - more appropriately, FREE SIZE.

Going forward, in my opinion, there will be several t2-based instances helping in elasticity scenarios by trying to stretch themselves a little bit and only then reporting - I am maxed out.

So today, it is completely worthwhile to move every single-instance application to a t2-based instance and get good use of it in terms of both performance and cost, i.e. move m1.small to t2.small and m1.medium to t2.medium.
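For what it's worth, here is a hedged sketch of the in-place resize path with boto 2.x (stop, change the instanceType attribute, start); the instance id and region are placeholders. One caveat worth noting: t2 instances require an HVM AMI and a VPC, so an m1 instance built on a PV image may need to be rebuilt rather than resized.

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')   # placeholder region
    instance_id = 'i-0123abcd'                       # hypothetical instance id

    conn.stop_instances(instance_ids=[instance_id])
    # ... poll until the instance reports 'stopped' before continuing ...
    conn.modify_instance_attribute(instance_id, 'instanceType', 't2.small')
    conn.start_instances(instance_ids=[instance_id])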

IMHO, just as in PaaS you generally deploy your app and forget the rest while the PaaS provider takes care of everything else, the Amazon EC2 t2 family PaaS-ifies things in its own IaaS style: your instance bursts out a little on its own, up to a certain extent.

Friday, July 4, 2014

Packt celebrates 10 years with a special $10 offer

This month marks 10 years since Packt Publishing embarked on its mission to deliver effective learning and information services to IT professionals. In that time it’s published over 2000 titles and helped projects become household names, awarding over $400,000 through its Open Source Project Royalty Scheme.
To celebrate this huge milestone, from June 26th Packt is offering all of its eBooks and Videos at just $10 each for 10 days – this promotion covers every title and customers can stock up on as many copies as they like until July 5th.



Dave Maclean, Managing Director explains ‘From our very first book published back in 2004, we’ve always focused on giving IT professionals the actionable knowledge they need to get the job done. As we look forward to the next 10 years, everything we do here at Packt will focus on helping those IT professionals, and the wider world, put software to work in innovative new ways.
We’re very excited to take our customers on this new journey with us, and we would like to thank them for coming this far with this special 10-day celebration, when we’ll be opening up our comprehensive range of titles for $10 each.

If you’ve already tried a Packt title in the past, you’ll know this is a great opportunity to explore what’s new and maintain your personal and professional development. If you’re new to Packt, then now is the time to try our extensive range – we’re confident that in our 2000+ titles you’ll find the knowledge you really need, whether that’s specific learning on an emerging technology or the key skills to keep you ahead of the competition in more established tech.’


More information is available at http://bit.ly/VzuviS

Tuesday, July 1, 2014

Python Boto code to keep your EC2 instance's Security Group in sync with your changing Public IP

I came across a StackOverflow question about staying secure while coping with the public IP changes that ISPs make as they recycle IPs from their pool. Generally, every time your public IP changes, chances are you can no longer connect to your EC2 instance, because you would have enabled ingress access only to your then-current public IP address ( unless you use 0.0.0.0/0, which is not recommended ).

I have tried to put down a small Python Boto script which gets your public IP address and sets it on your security group. You can enter your designated "Security Group Name" and then schedule the script as a CRON job.
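A minimal sketch of such a script, using boto 2.x; the security group name, region and port below are placeholders to adjust to your setup:

    import urllib2
    import boto.ec2

    SECURITY_GROUP = 'my-ssh-access'   # your designated Security Group name
    REGION = 'us-east-1'               # placeholder region
    PORT = 22                          # port to keep open for yourself

    def sync_my_ip():
        # Discover the current public IP of this machine.
        my_cidr = urllib2.urlopen(
            'https://checkip.amazonaws.com').read().strip() + '/32'

        conn = boto.ec2.connect_to_region(REGION)
        group = conn.get_all_security_groups(groupnames=[SECURITY_GROUP])[0]

        # Drop stale /32 rules on this port; keep the rule if it is current.
        already_current = False
        for rule in list(group.rules):
            if rule.ip_protocol == 'tcp' and str(rule.from_port) == str(PORT):
                for grant in list(rule.grants):
                    if grant.cidr_ip == my_cidr:
                        already_current = True
                    elif grant.cidr_ip:
                        group.revoke('tcp', PORT, PORT, cidr_ip=grant.cidr_ip)
        if not already_current:
            group.authorize('tcp', PORT, PORT, cidr_ip=my_cidr)

    if __name__ == '__main__':
        sync_my_ip()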


Tuesday, June 24, 2014

Preparing for AWS Certified Solutions Architect Certification

Interest in AWS certification is slowly picking up, and several people are opting to take the AWS certifications. I recently took the AWS Certified Solutions Architect - Associate Level exam and cleared it.

I am writing this blog post to spread information about the AWS certifications and to offer general tips on how to prepare for the Solutions Architect - Associate Level certification. Even before we start the test, we sign an NDA - Non-Disclosure Agreement - promising not to share the questions; abiding by that, this post provides only tips and pointers on how to prepare. It is not about sample questions, dumps, etc.


  1. First and foremost, remember that it is easier to understand the concepts of AWS than to search online for question dumps. If you are hands-on with AWS, that alone is sufficient to clear the test.
  2. Remember the certification title, Solutions Architect: think about the role of a Solutions Architect and prepare accordingly.
  3. The AWS documentation, the AWS SlideShare channel and the AWS YouTube channel are good places to learn for the certification. Again, there is nothing like the feel and learning you get out of hands-on experience.
  4. The FAQs for all the services are a very good place to refresh, recap, cover a lot of ground and discover the topics you may need to concentrate on.
  5. There are several courses and training materials offered by third-party trainers like Udemy and CloudAcademy, and they are good; but in my opinion hands-on experience is sufficient to clear the certification exam.
  6. CloudAcademy provides very good quizzes and multiple-choice Q&A, but they cover the entire length and breadth of the AWS services and products, including topics like costing, size information, restrictions, negative scenarios, etc., not all of which contribute to the Solutions Architect exam. CloudAcademy is a good place to test your skills, but don't lose heart if you can't answer many of its questions. If you are aligned with the blueprint and curriculum provided by AWS, that is sufficient.
  7. Concentrate on the core services first, then move on to the add-ons, and then go deeper into them.
  8. Think through the scenarios and use cases; understand when to use what, where and how. The "why shouldn't I" scenario is also important.
  9. Think along the lines of the certification titles, viz. Solutions Architect, Developer, DevOps; know their responsibilities and concentrate on the depth and breadth of the AWS services accordingly.
  10. Once you feel you are prepared, go through the sample questions, blueprints and curriculum fully, end to end, once again. They make a good refresher.

These are my views and opinions after taking the test. Again, these are purely my personal opinions.

All the very best ...

Services Which I feel will become obsolete in AWS

The things I really like about Amazon's ecosystem of products and services are: "make use of the economies of scale, innovate to reduce cost, feed the cost reduction back into products & services, and expand the economies of scale". It makes perfect sense when people call Amazon Earth's most customer-centric company.

With AWS - Amazon Web Services - we can more rightly and appropriately say they power the ideas and imaginations of start-ups and enterprises than say they provide scalable infrastructure on the cloud. Ever since pioneering AWS services like S3 and SQS were launched, there have been massive updates, features and new services added to the AWS stack. There is literally a new announcement from AWS every day, covering new services, new functionality, upgrades, cost reductions, etc.

With the AWS ecosystem's "cycle of economies of scale" constantly spinning and expanding, I feel there are a few services that will be superseded by newer services, or by the next, more advanced option ( i.e. costlier, bigger, better, faster ) being made the default at the same price as the currently existing one.


  1. SimpleDB
    • Really crisp and concrete non-relational service
    • Introduced very early
    • Its power and functionality - virtually unlimited rows - was really revolutionary then
    • Really cost effective
    • I think SimpleDB will be completely superseded by DynamoDB

  2. RDS Single AZ Deployment
    • A Single-AZ deployment was easy to start with and cheap to set up
    • I think Amazon will continue to innovate and provide / match the cost of a Multi-AZ setup at the same price as a Single-AZ deployment of RDS
  3. EC2 Classic
    • This can be put more appropriately as everything inside VPC, or everything with VPC
    • As such, a VPC doesn't cost anything at all
    • There wasn't anything called EC2-Classic at some point in time; the name by itself sounds like EC2 - Early Times.
    • Amazon has already taken a lot of steps to push VPC's power and adoption, like creating the Default VPC, etc.

  4. Magnetic Disks for all Services ( Fully SSD )
    • We can already get a feel for the SSD trend catching on: it started with DynamoDB, then Redshift, then General Purpose SSD EBS volumes for EC2
    • To be precise, I guess the term "Magnetic Disks" was newly coined to differentiate from SSD

  5. m1. family instances in EC2
    • Once upon a time we had t1.micro, m1.small, m1.medium, m1.large and m1.xlarge, and suddenly there were many initials for the instance families, like R, C, I, G
    • Not to forget the new m3

  6. Intra AZ Data Transfer Charge
    • It was June / July 2011 when Amazon made ingress data transfer free; it was jaw-dropping to read that blog post.
    • While many of the other cloud providers don't have the concept of an AZ ( or anything similar ), I guess there will be something around zero fees for intra-AZ data transfer
    • Chances are, it wouldn't even be surprising if there were an announcement of zero transfer cost between Amazon Regions as well
PS: This is my personal opinion; there isn't an official release stating these services will become obsolete. This is more like me claiming that "chocolate-flavored ice creams are the tastiest of all flavors". Neither claim has a basis. This is purely my opinion; importantly, my personal opinion.

Sunday, April 28, 2013

Things I do on Azure Virtual Machine before starting deployment

Azure Virtual Machine instances have behaved very differently many times. Once, an automatic update took 4 non-stop days to complete; another time, every single port and firewall setting was fine, but I couldn't RDP into the instance.


I am writing up this blog post to illustrate what I do every time before starting to deploy on a Virtual Machine instance.


Things to Do

Turn off Firewall 

Each instance actually has a logical firewall guarding its ports, operating at the instance level, i.e. at the hypervisor level. In the Azure management portal it is under the ENDPOINTS tab. To enable communication with the instance over a particular port, it is mandatory to open the port here in the portal. We specify a friendly name to identify the endpoint, the protocol [ TCP, UDP ] and the port / port range.



This guard is sufficient and effective for the instances. By default, the Windows Firewall is also active, and we need to add the port or program to the firewall's exceptions to enable access. So, with all the basic settings in place, in order to set up a web server using IIS, one must open ENDPOINT 80 TCP in the Azure Management Portal and then again in the Windows Firewall. This essentially means we are doing the same redundant operation at various levels.

Having the entire security setting in a single place is good enough, so I turn off the Windows Firewall and leave the port-security responsibility at the hypervisor level.
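Here is a small sketch of how this step can be scripted, run once inside the VM with administrator rights; it is just a thin wrapper around the standard netsh advfirewall command:

    import subprocess

    def disable_windows_firewall():
        # Turn the Windows Firewall off for every profile; port security
        # is then handled solely at the Azure endpoint (hypervisor) level.
        subprocess.check_call(
            ['netsh', 'advfirewall', 'set', 'allprofiles', 'state', 'off'])

    if __name__ == '__main__':
        disable_windows_firewall()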

Turn off IE Enhanced Security

I have written a separate blog post on solving the IE content-block alert in Windows Server. It is quite annoying for an administrator to keep adding every single URL to the exceptions before accessing a site. One can turn off the IE Enhanced Security setting during deployment and later turn the security setting on again if required.
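A hedged sketch of scripting this: IE Enhanced Security is toggled through two well-known Active Setup registry keys, one for administrators and one for users; verify the GUIDs against your Windows Server version before relying on it.

    import _winreg as winreg   # 'winreg' on Python 3

    # Active Setup components controlling IE Enhanced Security; the GUIDs
    # below are the commonly documented ones (an assumption to verify).
    IE_ESC_KEYS = [
        r'SOFTWARE\Microsoft\Active Setup\Installed Components'
        r'\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}',   # administrators
        r'SOFTWARE\Microsoft\Active Setup\Installed Components'
        r'\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}',   # users
    ]

    def disable_ie_esc():
        for path in IE_ESC_KEYS:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0,
                                 winreg.KEY_SET_VALUE)
            winreg.SetValueEx(key, 'IsInstalled', 0, winreg.REG_DWORD, 0)
            winreg.CloseKey(key)

    if __name__ == '__main__':
        disable_ie_esc()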

Turn off automatic updates

By default, automatic updates are turned on, which means the instance can search for and acquire Windows updates at any time, then install them and restart. This puts the instance at risk of downtime and inaccessibility for a period of time ( actually a considerable amount of time / days, in my case ).

All administrators are aware of the update process and its management; however, if the instance is managed or administered by a DevOps engineer, this is something to note. Turn off Windows Update; if any specific update is required, one can always search for that particular update and install it.

It is very risky to leave automatic updates turned on: if the architecture is deployed on a single instance, the entire application goes offline during the update operation.
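A sketch of scripting this via the documented Windows Update policy key; setting NoAutoUpdate = 1 under the AU policy key disables automatic updates. Run it inside the VM with administrator rights.

    import _winreg as winreg   # 'winreg' on Python 3

    AU_KEY = r'SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'

    def disable_automatic_updates():
        # NoAutoUpdate = 1 tells Windows Update not to download or
        # install anything on its own.
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, AU_KEY)
        winreg.SetValueEx(key, 'NoAutoUpdate', 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)

    if __name__ == '__main__':
        disable_automatic_updates()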

Change Private Ports to Match Public Ports

There are public ports and private ports in Windows Azure Virtual Machines, mainly used in scenarios of multiple instances / distributed deployments. If the deployment is going to be on a single instance, it is better to make the private port the same as the public port, since by default Windows Azure assigns, AFAIK, something like 43421 as the public port. Such port ranges generally lie outside an organization's firewall limits.
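A sketch of creating such an endpoint from the outside, via the cross-platform azure CLI of that era; the VM name, port and the exact argument order are assumptions, so check azure vm endpoint create --help on your installation first.

    import subprocess

    def add_matching_endpoint(vm_name, port):
        # Public (load-balanced) port and private (local) port kept identical.
        subprocess.check_call([
            'azure', 'vm', 'endpoint', 'create',
            vm_name, str(port), str(port)])

    if __name__ == '__main__':
        add_matching_endpoint('my-vm', 3389)   # hypothetical VM name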

Hope this will be useful for someone getting started with Azure Virtual Machines.