Tech Guides - Cloud & Networking

65 Articles

Dispelling the myths of hybrid cloud

Ben Neil
24 May 2017
6 min read
The words "vendor lock" worry me more than I'd like to admit. Whether it's having too many virtual machines in ec2, an expensive lambda in Google Functions, or any random offering that I have been using to augment my on-premise Raspberry Pi cluster, it's really something I'vefeared. Over time, I realize it has impacted the way I have spoken about off-premises services. Why? Because I got burned a few times. A few months back I was getting a classic 3 AM call asking to check in on a service that was failing to report back to an on premise Sensu server, and my superstitious mind immediately went to how that third-party service had let my coworkers down. After a quick check, nothing was broken badly, only an unruly agent had hung on an on-premise virtual machine. I’ve had other issues and wanted to help dispel some of the myths around adopting hybrid cloud solutions. So, to those ends, what are some of these myths and are they actually true? It's harder and more expensive to use public cloud offerings Given some of the places I’ve worked, one of my memories was using VMware to spin up new VMs—a process that could take up to ten minutes to get baseline provisioning. This was eventually corrected by using packer to create an almost perfect VM, getting that into VMware images was time consuming, but after boot the only thing left was informing the salt master that a new node had come online.  In this example, I was using those VMs to startup a Scala http4s application that would begin crunching through a mounted drive containing chunks of data. While the on-site solution was fine, there was still a lot of work that had to be done to orchestrate this solution. It worked fine, but I was bothered by the resources that were being taken for my task. No one likes to talk with their coworker about their 75 machine VM cluster that bursts to existence in the middle of the workday and sets off resource alarms. Thus, I began reshaping the application using containers and Hyper.sh, which has lead to some incredible successes (and alarms that aren't as stressful), basically by taking the data (slightly modified), which needed to be crunched and adding that data to s3. Then pushing my single image to Hyper.sh, creating 100 containers, crunching data, removing those containers and finally sending the finalized results to an on premise service—not only was time saved, but the work flow has brought redundancy in data, better auditing and less strain on the on premise solution. So, while you can usually do all the work you need on-site, sometimes leveraging the options that are available from different vendors can create a nice web of redundancy and auditing. Buzzword bingo aside, the solution ended up to be more cost effective than using spot instances in ec2. Managing public and private servers is too taxing I’ll keep this response brief; monitoring is hard, no matter if the service, VM, database or container,is on-site or off. The same can be said for alerting, resource allocation, and cost analysis, but that said, these are all aspects of modern infrastructure that are just par for the course. Letting superstition get the better of you when experimenting with a hybrid solution would be a mistake.  The way I like to think of it is that as long as you have a way into your on-site servers that are locked down to those external nodes you’re all set. 
Managing public and private servers is too taxing

I'll keep this response brief: monitoring is hard, whether the service, VM, database, or container is on-site or off. The same can be said for alerting, resource allocation, and cost analysis, but these are all aspects of modern infrastructure that are simply par for the course. Letting superstition get the better of you when experimenting with a hybrid solution would be a mistake. The way I like to think of it is that as long as access to your on-site servers is locked down to those external nodes, you're all set. If you need to set up more monitoring, go ahead; the slight modification to Nagios or Zabbix rules won't take much coding, and the benefit of notifying on-call will always be at hand. An added benefit, depending on the off-site service, is a level of resiliency that may not have been accounted for on-site: being more highly available through a provider. For example, sometimes I use Monit to restart a service, or depend on systemd or upstart to bring a temperamental service back up. Using AWS, I can set up alarms that trigger events to run a predefined runbook, which handles the failure and saves me from that aforementioned 3 AM wakeup (a rough sketch of one such alarm appears at the end of this article). Note that both of these edge cases have their own solutions, and neither is "taxing"; they're just par for the course.

Too many tools, not enough adoption

You're not wrong, but if your developers and operators aren't embracing even a rudimentary adoption of these new technologies, you may want to look at your culture. People should want to reduce cost through these new choices, even if that change is cautious; take a second look at that S3 bucket or Pivotal Cloud Foundry app, because nothing should be immediately discounted. Taking the time to apply a solution to an off-site resource can often result in an immediate saving in manpower. Think about it for a moment: given whatever internal infrastructure you're dealing with, consider the number of people who are around to support that application. Sometimes it's nice to give them a break. Take that learning curve on yourself and empower your team (and wiki of choice) to create a different solution from what is currently available in your local infrastructure. Whether it's a Friday code jam or just tackling a pain point in a difficult deployment, crafting better ways of dealing with those common difficulties through a hybrid cloud solution can create more options. Which, after all, is what a hybrid cloud is attempting to provide: options that can be used to reduce costs, increase general knowledge, and bolster an environment that invites more people to innovate.

About the author

Ben Neil is a polyglot engineer who has had the privilege of filling a lot of critical roles, whether that's front- and back-end application development, system administration, integrating DevOps methodology, or writing. He has spent 10+ years creating solutions, services, and full-lifecycle automation for medium to large companies. He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community and bringing its benefits to companies that strive to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on GitHub.
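As referenced in the monitoring section above, a provider-side alarm can kick off automated remediation instead of waking someone up. Below is a minimal, illustrative sketch using boto3 to create a CloudWatch alarm; the alarm name, metric, threshold, instance ID, and the SNS topic that would trigger a runbook are all hypothetical, and the original article does not specify how its alarms were configured.

```python
# Minimal sketch: a CloudWatch alarm that fires when CPU stays high, notifying
# an SNS topic that could trigger an automated runbook (ARN and IDs are made up).
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="cruncher-cpu-high",                      # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                          # 5-minute samples
    EvaluationPeriods=3,                                 # sustained ~15 minutes
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:runbook-topic"],
)
```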

5 common misconceptions about DevOps

Hari Vignesh
08 Aug 2017
4 min read
DevOps is a transformative operational concept designed to help development and production teams coordinate operations more effectively. In theory, DevOps is focused on cultural changes that stimulate collaboration and efficiency, but the focus often ends up being placed on everyday tasks, distracting organizations from the core principles and values that DevOps is built around. This has led many technology professionals to develop misconceptions about DevOps, because they have been part of deployments, or know people who have been involved in DevOps plans, that strayed from the core principles of the movement. Let's discuss a few of these misconceptions.

We need to employ 'DevOps'

DevOps is not a job title or a specific role. Your organization probably already has senior systems administrators and senior developers who have many of the traits needed to work in the way that DevOps promotes. With a bit of effort and help from outside consultants, mailing lists, or conferences, you might easily be able to restructure your business around these principles without employing new people, or losing old ones. Again, there is no such thing as a "DevOps person". It is not a job title. Feel free to advertise for people who work with a DevOps mentality, but there are no DevOps job titles. Often, good people to consider as a bridge between teams are generalists, architects, and senior systems administrators and developers. Many companies have employed a number of specialists over the past decade (a DNS administrator is not unheard of). You can still have these roles, but you'll need some generalists with a good background in multiple technologies. They should be able to champion the values of simple systems over complex ones, and begin establishing automation and cooperation between teams.

Adopting tools makes you DevOps

Some who have recently caught wind of the DevOps movement believe they can instantly achieve this nirvana of software delivery simply by following a checklist of tools to implement within their team. Their assumption is that if they purchase and implement a configuration management tool like Chef, a monitoring service like Librato, or an incident management platform like VictorOps, then they've achieved DevOps. But that's not quite true. DevOps requires a cultural shift beyond simply implementing a new lineup of tools. Each department, technical or not, needs to understand the cultural shift behind DevOps: one that emphasizes empathy and better collaboration. It's more about people.

DevOps emphasizes continuous change

There's no way around it: you will need to deal with more change and release tasks when integrating DevOps principles into your operations, since the focus is placed heavily on accelerating deployment through development and operations integration. This perception comes out of DevOps' initial popularity among web app developers. In practice, though, most businesses will not face change that frequent, and they do not need to worry about continuous change deployment just because they are adopting DevOps.
DevOps does not equal "developers managing production"

DevOps means development and operations teams working together collaboratively to put the operations requirements for stability, reliability, and performance into development practices, while at the same time bringing development into the management of the production environment (for example, by putting developers on call, or by leveraging their development skills to help automate key processes). It doesn't mean a return to the laissez-faire "anything goes" model, where developers have unfettered access to the production environment 24/7 and can change things as and when they like.

DevOps eliminates traditional IT roles

If, in your DevOps environment, your developers suddenly need to be good system administrators, change managers, and database analysts, something has gone wrong. Treating DevOps as a movement that eliminates traditional IT roles puts too much strain on workers. The goal is to break down barriers to collaboration, not to ask your developers to do everything. Specialized skills play a key role in supporting effective operations, and traditional roles remain valuable in DevOps.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Is OpenStack a ‘Jack of All Trades, Master of None’ Cloud Solution?

Richard Gall
06 Oct 2015
4 min read
OpenStack looks like all things to all people. Casually perusing its website, you can see that it emphasises the cloud solution's high-level features: 'compute', 'storage', and 'networking'. Each of these is aimed at a different type of user, from development teams to sysadmins. Its multi-faceted nature makes OpenStack difficult to define. Is it, we might ask, Infrastructure as a Service or Software as a Service? And while OpenStack's scope might look impressive on paper, ticking the box marked 'innovative', when it comes to actually choosing a cloud platform and developing a relevant strategy, it begins to look like more of a risk. Surely we'd do well to remember the axiom 'jack of all trades, master of none'?

Maybe if you're living in 2005, or even 2010. But in 2015, if you're frightened of the innovation that OpenStack offers (and, for those of you really lagging behind, of cloud in general), you're missing the bigger picture. You're ignoring the fact that true opportunities for innovation and growth don't simply lie in faster and more powerful tools, but instead in more efficient and integrated ways of working. Yes, this might require us to rethink the ways we understand job roles and how they fit together, but why shouldn't technology challenge us to be better, rather than just make us incrementally lazier? OpenStack's multifaceted offering isn't simply an experiment that answers the somewhat masturbatory question 'how much can we fit into a single cloud solution?'; it is in fact informed (unconsciously, perhaps) by an agile philosophy. What's interesting about OpenStack, and why it might be said to lead the way when it comes to cloud, is what it makes possible.

Small businesses and startups have already realized this (although that is likely through simple necessity as much as strategic decision making, considering how much cloud solutions can cost), but it's likely that larger enterprises will soon come to the same conclusion. And why should we be surprised? Is the tiny startup really that different from the Fortune 500 corporation? Yes, larger organizations have legacy issues, with both technology and culture, but these are gradually fading as we enter a new era in which we expect something more dynamic from the tools and platforms we use. Which is where OpenStack fits in: no longer a curiosity or a 'science project', it is now the standard to which other cloud platforms are held. That it offers such impressive functionality for free (at a basic level) means that the enterprise solutions that once had a hold over the marketplace will now have to play catch-up. It's likely they'll be using the innovations of OpenStack as a starting point for their own developments, which they will hope keep them 'cutting-edge' in the eyes of customers.

If OpenStack is going to be the way forward, and the wider technical and business communities (whoever they are, exactly) are to embrace it with open arms, there will need to be a cultural change in how we use it. OpenStack might well be the jack of all trades and master of all when it comes to cloud, but it nevertheless places the emphasis on users to use it in the 'correct' way. That's not to say there is one correct way; it's more about using it strategically and thinking about what you want from OpenStack. CloudPro articulates this well, arguing that OpenStack needs a 'benevolent dictator'.
'Are too many cooks spoiling the open-source broth?' it asks, getting to the central problem with all open-source technologies: the existentially troubling fact that the possibilities are seemingly endless. This doesn't mean we need to step away from the collaborative potential of OpenStack; it emphasises that effective collaboration requires an effective and clear strategy. Orchestration is the aim of the game, whether you're talking software or people.

At this year's OpenStack conference in Vancouver, COO Mark Collier described OpenStack as 'an agnostic integration engine… one that puts users in the best position for success'. This is a great way to define OpenStack, and it positions the platform as more than just a typical cloud offering. Its agnosticism is particularly crucial: it doesn't take a position on what you do; it simply makes it possible for you to do what you think you need to do. Maybe, then, OpenStack is a jack of all trades that lets you become the master.

For more OpenStack tutorials and extra content, visit our dedicated page.

3 Reasons Why "the Cloud" Is a Terrible Metaphor (and One Why It Isn't)

Sarah
01 Jul 2014
4 min read
I have a lot of feelings about "the cloud" as a metaphor for networked computing. All my indignation comes too late, of course. I've been having this rant for a solid four years, and that ship has long since sailed: the cloud is here to stay. As a figurative expression for how we compute these days, it's proven to have far more sticking power than, say, the "information superhighway". (Remember that one?) Still, we should always be careful about the ways we use figurative language. Sure, you and I know we're really talking about odd labyrinths of blinking lights in giant refrigerator buildings. But does your CEO? I could talk a lot about the dangers of abstracting away our understanding of where our data actually is and who has the keys. But I won't, because I have even better arguments than that. Here are my three reasons why "the cloud" is a terrible metaphor:

1. Clouds are too easy to draw

Anyone can draw a cloud. If you're really stuck you just draw a sheep and then erase the black bits. That means you don't have to have the first clue about things like SaaS/PaaS/IaaS or local persistent storage to include "the cloud" in your PowerPoint presentation. If you have to give a talk in half an hour about the future of your business, clouds are even easier to draw than Venn diagrams about morale and productivity. Had we called it "Calabi–Yau Manifold Computing", the world would have saved hundreds of man-hours spent in nonsensical meetings. The only thing sparing us from a worse fate is the stalling confusion that comes from trying to combine slide one ("The Cloud") and slide two ("BlueSky Thinking!").

2. Hundreds of Victorians died from this metaphor

Well, okay, not exactly. But in the nineteenth century, the Victorians had their own cloud concept: the miasma. The basic tenet was that epidemic illnesses were caused by bad air in places too full of poor people wearing fingerless gloves (for crime). It wasn't until John Snow pointed to the infrastructure that people worked out where the disease was coming from. Snow mapped the pattern of pipes delivering water to infected areas and demonstrated that germs at one pump were causing the problem. I'm not saying our situation is exactly analogous. I'm just saying that if we're going to do the cloud metaphor again, we'd better be careful of metaphorical cholera.

3. Clouds might actually be alive

Some scientists reckon that the mechanism that lets clouds store and release precipitation is biological in nature. If this understanding becomes widespread, the whole metaphor's going to change underneath us. Kids in school who've managed to convince the teacher to let them watch a DVD instead of doing maths will get edu-tained about it. Then we're all going to start imagining clouds as moving colonies of tiny little cartoon critters. Do you want to think about that every time you save pictures of your drunken shenanigans to your Dropbox?

And one reason why it isn't a bad metaphor at all:

1. Actually, clouds are complex and fascinating

Quick pop quiz: what's the difference between cirrus fibratus and cumulonimbus? If you know the answer to that, you're most likely either a meteorologist, or you're overpaid to sit at your desk googling the answers to rhetorical questions. In the latter case, you'll have noticed that the Wikipedia article on clouds is about seventeen thousand words long. That's a lot of metaphor. Meteorological study helps us to track clouds as they move from one geographic area to another, affecting climate, communications, and social behaviour.
Through careful analysis of their movements and composition, we can make all kinds of predictions about how our world will look tomorrow. The turning point came when we stopped imagining chariots and thunder gods, and started really looking at what lay behind the pictures we'd painted for ourselves.

Buying versus Renting: The Pros and Cons of Moving to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
Convenience

One major benefit of the IaaS model is the promise of elasticity to support unforeseen demand. This means the cloud vendor provides the ability to quickly and easily scale the provided resources up or down based on actual usage. An organization can therefore plan for the "average" case of usage instead of the "worst case", simultaneously saving on costs and preventing outages. Additionally, since the systems provided through cloud vendors are usually virtual machines running on the vendor's underlying hardware, adding new machines, increasing disk space, or subscribing to new services is usually just a change through a web UI, instead of a complicated hardware or software acquisition process. This flexibility is appealing because it significantly reduces the waiting time required to support a new capability.

However, this automation is sometimes a hindrance to administrators and developers who need access to the low-level configuration settings of certain software. Additionally, since the services are offered through a virtualized system, continuity in the underlying environment can't be guaranteed. Some applications, such as benchmarking tools, may not be suitable for that type of environment.

Cost

One appealing factor in the transition to the cloud is cost, but in certain situations, using the cloud may not actually be cheaper. Before making a decision, your organization should evaluate the following factors to make sure the transition will be beneficial. One major benefit is the impact on your organization's budget: if costs are transitioned to the cloud, they usually count as operational expenditures rather than capital expenditures, which in some situations can make a difference when trying to get the budget for a project approved. Additionally, some savings may come in the form of reduced maintenance and licensing fees, since these expenditures are absorbed into the monthly cost rather than being an upfront requirement. When subscribing to the cloud, you can also disable any unnecessary resources on demand, reducing costs; with real hardware, the servers would need to remain on 24/7 to provide the same access.

On the other hand, consider the size of the data. Vendors charge for moving data into or out of the cloud, in addition to the charge for storage, and in some cases the data transfer time alone would prohibit the transition. Also, the elasticity benefits that draw some people to the cloud (scaling up automatically to meet unexpected demand) can have an unexpected impact on the monthly bill. These costs are sometimes difficult to predict, and since cloud pricing is usage-based, it is important to weigh the possibility of an unanticipated hefty bill against an initial hardware investment.
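To make the "disable unnecessary resources on demand" point concrete, here is a minimal sketch using boto3 against AWS EC2, purely as an illustration: it stops running instances tagged as needed only during office hours. The tag name and the idea of running this on an overnight schedule are assumptions made for the example, not something the article prescribes.

```python
# Minimal sketch: stop non-production instances outside working hours so they
# stop accruing compute charges (attached storage still bills). Assumes AWS
# credentials are configured and a hypothetical "Schedule=office-hours" tag.
import boto3

ec2 = boto3.client("ec2")

# Find running instances that are tagged as safe to stop overnight.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```

An equivalent job scheduled for the morning could call `start_instances` on the same IDs, which is exactly the kind of on/off flexibility that always-on physical hardware cannot match.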
Reliability

Most cloud vendors guarantee service availability or access to customer support, placing that burden on the vendor rather than on the project's IT department. Similarly, most cloud vendors provide backup and disaster recovery options, either as add-ons or built into the main offering. This can be a benefit for smaller projects that have the requirement but do not have the resources to support two full clusters internally. However, even with these guarantees, vendors still need to perform routine maintenance on their hardware. Some server-side issues will result in virtual machines being disabled or relocated, usually communicated with some advance notice. In certain cases this will cause interruptions and require manual intervention from the IT team.

Privacy

All data and services that are transitioned into the cloud will be accessible from anywhere via the web, for better or worse. The technique of isolating the hardware on its own private network or behind a firewall is no longer possible. On the positive side, this means that everyone on the team can work from any Internet-connected device. On the negative side, it means that every precaution needs to be taken so that the data stays safe from prying eyes. For some organizations, the privacy concerns alone are enough to keep projects out of the cloud. Even assuming that the cloud can be made completely secure, stories in the news about data loss and password leakage will continue to project a perception of inherent danger. It is important to document all precautions being taken to protect the data and to make sure that all affected parties in the organization are comfortable moving to the cloud.

Conclusion

The decision of whether or not to move to the cloud is an important one for any project or organization. The benefits of flexible hardware requirements, built-in support, and general automation must be weighed against the drawbacks of decreased control over the environment and privacy.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.