
Tech Guides

852 Articles

Android: Your Mobile Platform of Choice

Richard Gall
21 Mar 2016
2 min read
It’s been a long week of argument and debate, strong words and opinions – and that’s just in the Packt office. But now the votes have been counted, we can announce that Android is the Packt customer’s mobile platform of choice. Across our website poll and our Twitter poll, Android was the clear winner. Throughout the week it also proved to be the most popular platform with customers, with sales of our Android eBooks exceeding those for iOS. As you can see, our Twitter poll delivered a particularly significant win for Android.

Clearly there was a lot of love for Android. But what we really loved about the week was hearing some interesting perspectives from mobile developers around the world. This tweet in particular summed up why we think Android dominated the vote: fundamentally, it’s all about customization. With Android you have more freedom as a developer, which, for many developers, is central to the sheer pleasure of the development experience.

Of course, the freedom you get with Android is only a certain type of freedom – and there are, of course, trade-offs if you want the openness of such a platform. This article from October 2015 suggested that Android development is ‘30% more expensive than iOS development’ due to the longer amount of time Android projects take – the writers estimate that, on average, you write 40% more code when working with Android than with iOS. But with new tools on the horizon likely to make Android development even more efficient (after all, think about what it was like to build for Android back in 2013!), it’s unsurprising that it should prove so popular with many developers.

We’re celebrating Android’s win with an additional week of offers – which means you’ve now got another week to pick up our very best Android titles and get ready for a bright and exciting future in the mobile development world!


A Maker's Journey into 3D printing

Travis Ripley
30 Jun 2014
14 min read
If you’ve visited any social media outlets, you’ve probably come across a never-ending list of new words and terms—the Internet of Things, technological dissonance, STEM, open source, tinkerer, maker culture, constructivism, DIY, fabrication, rapid prototyping, techshop, makerspace, 3D printers, Raspberry Pi, wearables, and more. These terms are typically used to describe a Maker, or they have something to do with Maker culture. Follow along to learn about my particular journey into the Maker culture, specifically in the 3D printing space.

The rise of the maker culture

Maker culture is on the rise. This is a culture that thrives at the intersection of technology and innovation at the informal, social, and peer-led level. The interactions of skilled people driven to share their knowledge with others, develop new pathways, and create solutions for current problems have built a new community. I am proud to say that I am a Maker-Tinkerer (or that I have some form of motivated ADHD that drives me to engage in engineering-oriented pursuits). My journey started at ground zero while studying 3D design and development.

A maker's journey

I knew there was more I could do with my knowledge of rendering the three-dimensional surface of an object. Early on, however, I only thought about extending my knowledge for entertainment purposes, such as video games. I didn’t understand the power of having this knowledge and the way it could help create real-world solutions. Then I came across an issue of Make Magazine and it changed my mental state overnight—I had to create tangible things. Now that I had the information to send me in the right direction, I needed an outlet. An industry friend mentioned a local hackerspace, known as Deezmaker, which was holding informational workshops about 3D printing. So I signed up for an introductory class. I had no clue what I was getting myself into as I crossed that first threshold, but by that evening I was versed in topics that I had thought were far beyond my mental capabilities. I was hooked. The workshop consisted of part lecture and part hands-on material. I learned that you can't just start using a 3D printer: you need some basic understanding of the manufacturing process, like understanding that layers of material need to be successfully laid down before moving on to the next stage in the process. Being the curious, impatient, and overly enthusiastic man-child that I am, this was the most difficult part for me, as I couldn’t wait to engage in this new world.

3D printing

Almost two years later, I am fully immersed in the world of 3D printing. I currently have a 3D printer at home (which is almost obsolete by today’s standards) and I have access to multiple printers at a local techshop/makerspace known as Makerplace here in San Diego, CA. I use this technology regularly, since I have changed direction in my career, moving from work as a 3D artist towards manufacturing engineering and rapid prototyping. I am currently attending a Machine Technology/Engineering program at San Diego City College (for more info on the best machining program in the country, visit http://www.JCbollinger.com). The benefit for me of using 3D printers is rapidly producing iterations of prototypes for my clientele: most people feel more reassured by the process if they have tangible, solid objects, and they are more likely to trust you as a designer.
I feel that having access to this also helps me complete more jobs successfully, given that turnaround times for updates can be as little as a few hours rather than days or weeks (depending on size/scale). Currently I have a few recurring clients who want updates often, and by showing them my progress the iterations are fewer and I can move on to the next project without hesitation, since we can see design updates rapidly and minimize the flaws and failures. I produce prototypes for all industries: toys, robotics, vehicles, and so on. Think of it as producing solutions: how you can make something better or simpler. Entertaining the idea of a challenge and solving it has benefits, as with each new design job you have all these tangible objects to look at and examine. As a hobbyist, the technology has made it easy to reproduce new or even obsolete items. For example, I love Transformers, but you know how plastic does two things very well: it breaks and gets lost. I came across a forum where people were distributing the files for the arm extrusions that break (no one likes gluing), so I printed the parts that had been missing for decades, rebuilt the armature that had for so long been displaced, and then, like magic, I felt like I was six years old again with a perfectly working Transformer.

Here are a few things that I've learned along the way. 3D printing is also known as additive manufacturing. It is the process of producing three-dimensional objects in which successive layers of varied material are extruded by computer-controlled equipment that is fed information from 3D models; these models are derived from a data source that processes the information into machine language. The plastic extrusion technology that is now slowly becoming more popular is known as Fused Deposition Modeling (FDM). This process was developed in the early 1990s for job production, mass production, rapid prototyping, product development, and distributed manufacturing. The principle of FDM is that material is laid down in layers. There are many other processes, such as Selective Heat Sintering (SHS), Selective Laser Sintering (SLS), Stereolithography (SLA), and Plaster-Based 3D Printing (PP), to name a few. We will keep it simple here and go over the FDM process for now, as most printers at the hobbyist level use it.

The FDM process significantly affected roles within the production and manufacturing industries, with engineers, designers, and operators wearing multiple hats, and its growth made the technology affordable to an array of industrial fields. CNC machining, by contrast, is a subtractive manufacturing process, and it has naturally been incorporated to work alongside this development. The influence of this technology on the industrial and manufacturing industries created exposure to new methods of production, such as automation, at exponential rates. For the home-use and hobbyist market, the 3D printers produced by the open source/open hardware initiative stem directly or indirectly from the RepRap.org project, a free to low-cost desktop 3D printer that is self-replicating. That being said, you can thank them for starting this revolution. By getting involved in this community you benefit everyone, spreading the spark that will continue to create new developments in manufacturing and consumer technology.
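To make the layer-by-layer FDM principle described above concrete, here is a toy Python sketch of my own (not production slicing code, and all values are made up for illustration) that emits the kind of machine language an FDM printer consumes: G-code tracing a square perimeter at successive layer heights.

# Toy illustration: emit G-code that traces a 20 mm square perimeter,
# layer by layer, the way an FDM printer builds up a part.
def square_layers(side=20.0, layer_height=0.2, layers=5):
    lines = ["G28 ; home all axes", "G21 ; millimeter units", "G90 ; absolute positioning"]
    e = 0.0  # cumulative filament length fed to the extruder (E axis)
    for i in range(1, layers + 1):
        lines.append(f"G1 Z{i * layer_height:.2f} F600 ; step up to the next layer")
        for x, y in [(0, 0), (side, 0), (side, side), (0, side), (0, 0)]:
            e += 0.5  # fake extrusion amount; a real slicer derives this from geometry
            lines.append(f"G1 X{x:.1f} Y{y:.1f} E{e:.2f} F1200")
    return "\n".join(lines)

print(square_layers())

A real slicer does far more (perimeters, infill, temperatures, retraction), but every FDM print ultimately reduces to move-and-extrude commands like these.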
The FDM process can be done with a multitude of materials; the two most popular options at this time are PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene). Both have pros and cons depending on your model's structure, the future use of the print, and client requests; understanding the fundamental differences between the two can help you choose one over the other or, if you own a printer with two extruders, decide how to combine them. In some cases PVA (polyvinyl alcohol) is used as a support material (in the case of two extruders); unlike PLA or ABS, which require cleanup when used as support material, PVA is water soluble, so you can soak your print in warm water and the support structures will dissolve away.

PLA is a strong biodegradable plastic derived from renewable resources: cornstarch and sugarcane. It is more resistant to UV rays than ABS (so you will not see fading in your prints), and it sticks better than any other material to the surface of your hotplate (minimal warping), which is a huge advantage. It prints at around 180°C, it can ooze, and if your nozzle is loaded it will drip, which also means that leaving a print in your car on a hot day may cause damage.

ABS is stronger than PLA but non-biodegradable; it is a synthetic polymer whose acrylonitrile monomer is produced from propylene and ammonia. It is tougher than PLA and also more flexible, and it is a colorfast material (which means it will hold its color for years). It prints at around 220°C and, being amorphous, has no true melting point, so a heated bed is needed: warping can and will occur, usually because the bed is not hot enough (at least 80°C) or the Z axis is not calibrated correctly.

Printer options

For the hobbyist maker, there are a few 3D printer options to consider. Depending upon your skill level, needs, budget, and commitments, there is a printer out there for you. The least expensive, smallest, and most straightforward printer on the market is the Printrbot Simple Maker’s 3D Printer. Retailing at $349.99, this printer comes as a kit that includes the bare necessities you need to get started. It is capable of printing a 4” cube. You can also purchase it already assembled for a little extra. The kit and PLA filament are available at www.makershed.com.

The 3D printer I started on, personally own, and recommend is the Afinia H480 3D printer. Retailing at $1299.99, it provides the easiest setup right out of the box: it comes fully assembled, has a heated platform to aid adhesion and reduce the chance of warping, and can print up to a 5” cube. It also comes loaded with its own native 3D software, where you can manipulate your .STL files. It has an automated utility to calibrate the printer’s build platform against the printhead, and it automatically generates any needed support material and the “raft”, the base support for your prints. There is so much more to it, but as I said, I recommend this one for beginners; it is also available through www.makershed.com.

For the person who wants to print at the hobbyist and semi-professional level, consider the next generation in 3D printing, the MakerBot Replicator. It is quick and efficient.
Retailing at $2899.00, this machine has an extremely high layer resolution and an LCD display, and if you run out of filament (ABS/PLA) there is no need to start over: the machine will alert you via computer or smartphone that a replacement is needed. There are many types of 3D printers available, with options including open source, open hardware, filament types, delta-style mechanics, single/double extruders, and the list goes on. My main suggestion is to try before you buy, either at a local hackerspace or a local Makerfaire. It’s a worthwhile investment that pays for itself.

Choosing your tools

Before you begin, it's also important to choose your design tools. There is a multitude of cost-effective and free tools out there to get you started, many of them open source; here are some of my favorites. First off, the 3D printing process has a required “tool-chain” that must be followed in order to complete the process, roughly broken down into three parts:

CAD (Computer Aided Design): Tools used to design 3D parts for printing. There are very few interchangeable CAD file formats; these are sometimes referred to as parametric files. The most widely used interchangeable mesh file format is .STL (Stereolithography). This format is the most important, as it is used by CAM tools.

CAM (Computer Aided Manufacturing): Tools handling the intermediate step of translating CAD files into a machine-friendly format.

Firmware for electronics: This is what runs the onboard electronics of the printer, and it is the closest to actual programming, via a process known as cross compiling.

Here are my best picks in each category, known as FLOSS (free/libre/open source software). FLOSS CAD tools, for example OpenSCAD, FreeCAD, and HeeksCAD, for the most part create parametric files that usually represent parts or assemblies in terms of CSG (Constructive Solid Geometry): a tree of Boolean operations performed on primitive shapes such as cubes, spheres, cylinders, and pyramids. These are modified numerically and with great precision; the geometry is a mathematical representation that stays exact no matter how much you zoom in or out. Another category of CAD tool represents parts as 3D polygon meshes and is for the most part used for special effects in movies or video games (CG). These tools are also a little more user friendly; examples are Autodesk Maya and Autodesk 3ds Max (both subscription/retail-based). There are also free options such as Autodesk 123D, Google SketchUp, and Blender (the last of which is open source); I suggest the latter options, since they are free and user friendly, and they are much easier to learn because their features are narrowed down strictly to producing 3D meshes. If you need more precision you should look at OpenSCAD (my favorite), as it was created directly for making physical objects rather than for game design or animation. OpenSCAD is easy to learn, with a simple interface; it is powerful and cross-platform, and there are many examples you can use, along with strong community support.

Next, you’ll need to convert your 3D masterpiece (.stl) into a machine-friendly format known as G-code. This process is also known as “slicing”, and you’re going to need some CAM software to produce the “tool paths”: the next stop in the tool chain. Most of the slicing software available is open source.
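In practice, this slicing step often amounts to a single command-line call that a script can automate. Here is a minimal Python sketch assuming the Slic3r CLI (one of the slicers listed below) is installed and on your PATH; the file name is a placeholder and exact flags vary between slicers and versions.

import subprocess
from pathlib import Path

def slice_to_gcode(stl_path, layer_height=0.2):
    """Turn an STL mesh into printer-ready G-code via an external slicer."""
    out = Path(stl_path).with_suffix(".gcode")
    # Assumes Slic3r's command-line interface; check your slicer's --help for its flags.
    subprocess.run(
        ["slic3r", stl_path, "--layer-height", str(layer_height), "--output", str(out)],
        check=True,  # raise if the slicer reports an error
    )
    return out

print(slice_to_gcode("part.stl"))  # "part.stl" is a placeholder model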
Some examples are Slic3r (the most popular, and easy enough to use that it's recommended for beginners), Skeinforge (dated, but still one of the best), Cura, and MatterSlice. There is also great closed source slicing software out there; one in particular is KISSlicer, whose pro version supports multi-extruder printing.

The next stop after slicing is software known as a G-code interpreter, which breaks down each line of the code into electronic signals, and a G-code sender, which sends the signals to the motors on the printer to tell them how to move. This software is usually directly linked to an EMC (Electronic Machine Controller), which controls the printer directly. It can also be linked to an integrated hardware interface that has a G-code interpreter built in, which loads the G-code directly from a memory card (SD card/USB).

The last stop is the firmware, which controls the electronics onboard the printer. For the most part, the CPUs that control these machines are simple microcontrollers, usually Arduino-based, and their firmware is compiled using the Arduino IDE. This process may sound time consuming, but once you go through the tool chain a few times it becomes second nature, just like driving a car with a manual transmission.

Where to go from here?

When I finished my first hackerspace workshop, I had been assimilated into a culture that I was not only benefiting from personally, but one I could share my knowledge with and contribute to. I have received far more in my journey as a maker than in any previous endeavor. To anyone who is curious and mechanically inclined (or not), and who believes they have a solution to a problem, I challenge you. I challenge you to make the leap into this culture: join a hackerspace, attend a makerfaire, and enrich your life and the lives of others.

About the Author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.


AngularJS: The Love Affair of the Decade

Richard Gall
05 Feb 2016
6 min read
AngularJS stands at the apex of the way we think about web development today. Even as we look ahead to Angular 2.0, the framework serves as a useful starting point for thinking about the formation of contemporary expectations about what a web developer actually does and the products and services they create. Notably (for me at least), Angular is closely tied up with Packt’s development over the past few years. It has had an impact on our strategic focus, forcing us to think about our customers in new ways.

Let’s think back to the world before AngularJS. This was back in the days when Backbone.js meant something, when Knockout was doing the rounds. As this article from October has it, AngularJS effectively took advantage of a world suffering from ‘framework fatigue’. It’s as if there was a ‘framework bubble’, and it was only when that bubble burst that the way forward became clearer. This was a period of experimentation and exploration; improvement and efficiency were paramount, but a symptom of this was the way in which trends – some might say fads – took hold of the collective imagination. This framework bubble, I’d suggest, prefigured the startup bubble, the period in which we’re living today. Developers were looking for new ways of doing things; they wanted to be more efficient, their projects more scalable, fast, and robust. All those words that are attached to development (in both senses of the word) took on particular urgency.

As you might expect, this unbelievable pace of growth and change was like catnip for Packt. This insatiable desire for new tools was something that we could tap into, delivering information and learning materials on even the most niche new tools. It was exciting. But it couldn’t last. It was thanks to AngularJS that this changed. Ironically, if AngularJS burst the framework bubble, ending what seemed like an endless stream of potential topics to cover, it also supplied us with some of our most popular titles. AngularJS Web Application Development Cookbook, for example, was a huge success. Written by Matt Frisbie, it helped us to forge a stronger relationship with the AngularJS world. It was weird – its success also brought an end to a very exciting period of growth, in which Packt was able to reach out to new customers, small communities that other publishers could not. But we had to grow up. AngularJS was like a friend’s wedding; it made us realise that we needed to become more mature, more stable.

But why, we should ask, was AngularJS so popular? Everyone is likely to have their own story, their own experience of adopting AngularJS, and that, perhaps, is precisely the point. Brian Rinaldi, in the piece to which I refer above, notes a couple of things that made Angular a framework to which people could commit. Its ties with Google, for example, gave it a mark of authority and reliability, while its ability to integrate with other frameworks meant developers still had the flexibility to use the tools they wanted while still having a single place to which they could return. Brian writes:

The point is, all these integrations not only made the choice of Angular easier, but make leaving harder. It’s no longer just about the code I write, but Angular is tied into my entire development experience.

Experience is fundamental here. If the framework bubble was all about different ways of doing the same thing faster and more effectively, today the reverse is true. Developers want to work in one way, but to be able to do lots of things.
It’s a change in priorities; the focus of the modern web developer in 2016 has changed. The challenges are different, as mobile devices, SPAs, cloud, and personalization have become fundamental issues for web developers to reckon with. Good web developers look beyond the immediacy of their project and think carefully about users and about how they can deliver a great product or service.

That’s what we’ve found at Packt. The challenges faced by the customers we serve are no longer quite so transparent or simple. If, just a few years ago, we relied upon the simple need to access information about a new framework, today the situation is more nuanced. Many of the challenges are due to changing user behaviour, a fragmentation of needs and contexts. For example, maybe you want to learn responsive web design? Or need to build a mobile app? Of course, these problems haven’t just appeared in the last 12 months, but they are no longer additional extras; they are central to success. It’s these problems that have had a part in causing the startup bubble – businesses solving (or, if they’re really good, disrupting) customer needs with software.

A framework such as React might be seen as challenging AngularJS. But despite its dedicated, almost evangelical core of support, it’s nevertheless relatively small. And it would also be wrong to see the emergence of React (alongside other tools, including Meteor) as a return to the heady days of the framework bubble. Instead it has grown out of a world shaped by Angular – it is, remember, a tool designed to build a very specific type of application. The virtual DOM, after all, is an innovation that helps deliver a truly immediate and fast user experience. The very thing that makes React great is why it won’t supplant Angular – why would it even want to? If you do one thing, and do it well, you’re adding value that people couldn’t get from anywhere else.

Fear of obsolescence – that’s the world into which AngularJS entered, and the world in which Packt grew. But today, the greatest fear isn’t so much obsolescence; it’s ‘Am I doing the right thing for my users? Are my customers going to like this website – this new app?’ So, as we await Angular 2.0, don’t forget what AngularJS does for you – don’t forget the development experience, and don’t forget to think about your users. Packt will be ready when you want to learn 2.0 – but we’ll also still have the insights and guidance you need to do something new with AngularJS. Progress and development isn’t linear; it’s never a straight line. So don’t be scared to explore, rediscover what works. It’s not always about what’s new; it’s about what’s right for you.

Save up to 70% on some of our very best web development titles from 11th to 17th April. From Flask to React to Angular 2, it's the perfect opportunity to push your web development skills forward. Find them here.


WebGL in Games

Alvin Ourrad
05 Mar 2015
5 min read
In this post I am not going to show you any game engine, framework, or library. This is a more general write-up that aims to give you an overview of the technology that powers some of those frameworks: WebGL.

Introduction

Back in 2011, 3D in the browser was not really a thing outside of the realm of Flash, and websites didn't make much use of the canvas element like they do today. That year, the Khronos Group started an initiative called WebGL: a project to create an implementation of OpenGL ES 2.0 as a royalty-free, standard, cross-browser API. Even though the canvas element can only draw 2D primitives, it actually is possible to render 3D graphics at a decent speed with it. By making clever use of perspective and a lot of optimizations, MrDoob with THREE.js managed to create a 3D canvas renderer, which quite frankly offers stunning results, as you can see here and there. But even though canvas can do the job, its speed and level of hardware acceleration are nothing compared to what WebGL benefits from, especially when you take into account the browsers on lower-end devices such as our mobile phones. Fast-forward in time: when Apple officially announced support for WebGL in mobile Safari in iOS 8, the main goal was reached, since most recent browsers were able to use this 3D technology natively.

Can I have 3D?

It's very likely that you can now. There are still some graphics cards that were not made to support WebGL, but global support is very good these days. If you are interested in learning how to make 3D graphics in the browser, I recommend you do some research on a library called THREE.js. This library has been around for a while and is usually what most people choose to get started with, as it is just a 3D library and nothing more. If you want to interact with the mouse, or create a bowling game, you will have to use some additional plugins and/or libraries.

3D in the gaming landscape

As support for and awareness of WebGL started rising, some entrepreneurs and companies saw it as a way to create a business or wanted to take part in this 3D adventure. As a result, several products are available to you if you want to delve into 3D gaming.

Playcanvas

This company likes saying that they re-created "Unity in the browser", which is not far from the truth, really. Their in-browser editor is very complete and mimics the entity-component system that exists in Unity. However, I think the best thing they have created among their products is their real-time collaboration feature. It allows you to work on a project with a team and instantly updates the editor and the visuals for everyone currently viewing it. The whole engine was also open sourced a few months ago, which has given us beautiful demos like this one: http://codepen.io/playcanvas/pen/ctxoD Feel free to check out their website and give their editor a try: https://playcanvas.com

Goo technology

Goo technology is an environment that encompasses a 3D engine (the Goo engine), an editor, and a development environment. Goo Create is also a very nicely designed in-browser 3D editor. What I really like about Goo is their cartoony mascot, "Goon", which you can see in a lot of their demos and branding and which adds a lot of fun and humanity to them.
Have fun watching this little dude in his adventures, and learn more about the company at this link: http://www.goocreate.com

Babylonjs

I wasn't sure if this one was worth including. Babylon is a competitor to THREE.js, created by Microsoft, that doesn't want to be "just a rendering engine" but wants to add some useful components available out of the box, such as camera controls, a physics engine, and some audio capabilities. Babylon is relatively new and definitely not as battle-tested as THREE.js, but the team has created a set of tools I like that help you get started with it, namely the playground and the shader editor.

2D?

Yes, there is a major point that I haven't mentioned yet: WebGL has been used in more 2D games than you might imagine. After all, there is no reason why 2D games shouldn't have this level of hardware acceleration. The first games to use WebGL for their 2D needs were Rovio's and ZeptoLab's JavaScript ports of their respective multi-million-dollar hits, Angry Birds and Cut the Rope. When pixi.js came out, a lot of people started using it for their games. The major HTML5 game framework, Phaser, is also using it.

Play!

This is the end of this post. I hope you enjoyed it and that you want to get started with these technologies. There is no time to waste -- it's all in your hands.

About the author

Alvin Ourrad is a web developer fond of the web and the power of open standards. A lover of open source, he likes experimenting with interactivity in the browser. He currently works as an HTML5 game developer.


FreeCAD: Open Source Design on the Bleeding Edge

Michael Ang
31 Dec 2014
5 min read
Are you looking for software for designing physical objects for 3D printing or physical construction? Computer-aided design (CAD) software is used extensively in engineering when designing objects that will be physically constructed. Programs such as Blender or SketchUp can be used to design models for 3D printing, but there’s a catch: it’s quite possible to design models that look great onscreen but don’t meet the "solid object" requirements of 3D printing. Since CAD programs are targeted at building real-world objects, they can be a better fit for designing things that will exist not just on the screen but in the physical world.

3D-printable servo-controlled Silly-String trigger by sliptonic

FreeCAD distinguishes itself by being open source, cross-platform, and designed for parametric modeling. Anyone is free to download or modify FreeCAD, and it works on Windows, Mac, and Linux. With parametric modeling, it’s possible to go back and change parameters in your design and have the rest of your design update. For example, if you design a project box to hold your electronics project and decide it needs to be wider, you could change the width parameter and the box would automatically update. FreeCAD allows you to design using its visual interface and also offers complete control via Python scripting.

Changing the size of a hole by changing a parameter

I recommend Bram De Vries’ FreeCAD tutorials on YouTube to help you get started with FreeCAD. The FreeCAD website has links to download the software and a getting started guide. FreeCAD is under heavy development (by a small group of individuals), so expect to encounter a little strangeness from time to time, and save often! If you’re used to software developed by a large and well-compensated engineering team, you may be surprised that certain features are missing, but on the other hand it’s really quite amazing how much FreeCAD offers in software that is truly free. You might find a few gaping holes in functionality, but you also won’t find any features that are locked out until you go "Premium".

If you didn’t think I was geeky enough for loving FreeCAD, let me tell you my favorite feature: everything is scriptable using Python. FreeCAD is primarily written in Python, and you have access to a live Python console while the program is running (View->Views->Python console) that you can use to interactively write code and immediately see the results. Scripting in FreeCAD isn’t through some limited programming interface or a limited programming language: you have access to pretty much everything inside FreeCAD using standard Python code. You can script repetitive tasks in the UI, generate new parts from scratch, or even add whole new "workbenches" that appear alongside the built-in features in the FreeCAD UI.

Creating a simple part interactively with Python

There are many example macros to try. One of my favorites allows you to generate an airfoil shape from online airfoil profiles. My own Polygon Construction Kit (Polycon) is built inside FreeCAD. The basic idea of Polycon is to convert a simple polygon model into a physical object by creating a set of 3D-printed connectors that can be used to reconstruct the polygon in the real world. The process involves iterating over the 3D model and generating a connector for each vertex of the polygon. Then each connector needs to be exported as an STL file for the 3D printing software.
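To give a flavor of that scripting, here is a minimal sketch of the kind of thing you can type into FreeCAD's Python console: build a box, cut a cylindrical hole through it with a Boolean operation, and export the result as an STL for printing. The dimensions and file name are invented for illustration, and the Part API calls are as found in recent FreeCAD versions, so check the FreeCAD wiki if anything differs in yours.

import FreeCAD
import Part

doc = FreeCAD.newDocument("Demo")

# A simple parametric part: change these numbers and re-run to regenerate it.
length, width, height, hole_radius = 40.0, 30.0, 10.0, 4.0

box = Part.makeBox(length, width, height)
hole = Part.makeCylinder(hole_radius, height, FreeCAD.Vector(length / 2, width / 2, 0))

part = box.cut(hole)                 # Boolean subtraction: box minus cylinder
Part.show(part)                      # display the shape in the active document
part.exportStl("box_with_hole.stl")  # hand the mesh off to your slicer

Wrap those numbers in a function and you have the essence of parametric design: re-run with a new width and everything downstream, including the exported STL, updates.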
By implementing Polycon as a FreeCAD module I was able to leverage a huge amount of functionality related to loading the 3D model, generating the connector shapes, and exporting the files for printing. FreeCAD’s UI makes it easy to see how the connectors look and make adjustments to each one as necessary. Then I can export all the connectors as well-organized STL files, all by pressing one button! Doing this manually instead of in code could literally take hundreds of hours, even for a simple model.

FreeCAD is developed by a small group of people and is still in the "alpha" stage, but it has the potential to become a very important tool in the open source ecosystem. FreeCAD fills the need for an open source CAD tool the same way that Blender and GIMP do for 3D graphics and image editing. Another open source CAD tool to check out is OpenSCAD. This tool lets you design solid 3D objects (the kind we like to print!) using a simple programming language. OpenSCAD is a great program: its simple syntax and interface are a great way to start designing solid objects using code and thinking in "X-Y-Z". My first implementation of Polycon used OpenSCAD, but I eventually switched over to FreeCAD, since it offers the ability to analyze shapes as well as create them, and Python is much more powerful than OpenSCAD’s programming language.

If you’re building 3D models to be printed or are just interested in trying out computer-aided design, FreeCAD is worth a look. Commercial offerings are likely to be more polished and reliable, but FreeCAD’s parametric modeling, scriptability, and cross-platform support in an open source package are quite impressive. It’s a great tool for designing objects to be built in the real world.

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit used to bridge the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is tech and artists whose medium is technology.


Angular, Responsive, and MEAN - how 2014 changed front-end development

Sarah C
09 Jan 2015
4 min read
Happy New Year, Web Devians. We've just finished off quite a year for web technologies, haven't we? 2014 was characterised by a growth in diversity – nowadays there’s an embarrassment of riches when it comes to making the most of CSS and JavaScript. We’re firmly past the days when jQuery was considered fancy. This year it wasn’t a question of whether we were using a framework – instead we’ve mostly been tearing our hair out trying to decide which one fits where. But whether you’re pinning your colours to Backbone or Angular, Node or PHP, there have been some clear trends in how the web is changing. Here’s Packt’s countdown of the top seven ways web tech has grown this year. If you weren’t thinking about these things in 2014, then it might be time to get up to speed before 2015 overtakes you!

Angular

We saw it coming in 2013, but in 2014 Angular basically ate everything. It’s the go-to framework for a subset of JavaScript projects that we’re going to refer to here as [“All Projects Ever”]. This is a sign of the times for where front-end development is right now. The single-page web application is now the heart of the new internet, which is deep, reactive, and aware. 2014 may go down as the year we officially moved the party to the client side.

Responsive Web Design

Here at Packt we’ve seen a big increase in people thinking about responsive design right from the beginning of their projects, and no wonder. In 2014 mobile devices crossed the line and outstripped traditional computers as the main way in which people browse the web. We glimpse the web now through many screens in a digital hall of mirrors. The sites we built in 2014 had to be equally accessible whether users were on IE8 at the library or tweeting from their Android while base jumping.

The MEAN stack

2014 put to rest for good the idea that JavaScript was a minor-league language that just couldn’t hack it on the back end. In the last twelve months MEAN development has shown us just how streamlined and powerful Node can be when harnessed with front-end JavaScript and JSON data storage. 2014 was the year MongoDB, Express, Angular, and Node had their break-out moment as the hottest band in web dev.

Data visualisation

Did you know that all the knowledge available in the whole world before 1800 compresses to fewer bytes than Twitter streams in a minute? Actually, I just made that up. But it is true that we are generating and storing data at an increasingly hectic rate. When it comes to making visual sense of it, web tech has had a big role to play. D3 continued to hold its own as one of the most important tools in web development this year. We’ve all been thinking visually about charts and infographics. Which brings us to…

Flat design

The internet we built in 2014 was flat and stripy, and it’s wonderful. Google’s unveiling of Material Design at this year’s I/O conference cemented the trend we’d all been seeing. Simple vector graphics, CSS animations, and a mature code-based approach to visuals have swept the scene. There are naysayers of course (and genuine questions about accessibility, which we’ll be blogging about next year), but overall this aesthetic feels mature. Like moments in traditional architecture, 2014 felt like a year in which we cemented a recognisable design era.

Testing and build tools

Yes, we know. The least fun part of JavaScript – testing it and building, rebuilding, rebuilding.
Chances are, though, that if you were involved in any large-scale web development this year you’ve now got a truly impressive Bat-utility belt of tools to work with. From Yeoman, to Gulp or Grunt, to Jasmine, to PhantomJS, updates have made everything a little more sophisticated.

Cross-platform hybrid apps

For decades we’ve thought about HTML/CSS/JavaScript as browser languages. With mobile technology, though, we’ve broadened our thinking, and bit by bit JS has leaked out of the browser. When you think about it, our phones and tablets are full of little browser-like mutants, gleefully playing with servers and streaming data while downplaying the fact that their grandparents were Netscape and IE6. This year the number of hybrid mobile apps – and their level of sophistication – has exploded. We woke up to the fact that going online on mobile devices can be repackaged in all kinds of ways while still using web tech to do all the heavy lifting.

All in all, it’s been an exciting year. Happy New Year, and here’s to our new adventures in 2015!

Modern Go Development

Xavier Bruhiere
06 Nov 2015
8 min read
The Go language indisputably generates a lot of discussion. Bjarne Stroustrup famously said:

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

Many developers indeed share their usage retrospectives and the flaws they came to hate: no generics, no official tool for vendoring, built-in methods that break the rules Go's creators want us to endorse. The language ships with a bunch of principles and a strong philosophy. Yet the Go gopher is making its way through companies. AWS is releasing its Go SDK, Hashicorp's tools are written in Go, and so are serious databases like InfluxDB or Cockroach. The language doesn't fit everywhere, but its concurrency model, its cross-platform binary format, and its lightning speed are powerful features. For the curious reader, Texlution digs deeper on Why Golang is doomed to succeed. It is also intended to be simple. However, one should gain a clear understanding of the language's conventions and data structures before producing efficient code. In this post, we will carefully set up a Go project to introduce a robust starting point for further development.

Tooling

Let's kick off the work with a standard Go project layout. New toys in town try to rethink the way projects are organized, but I like to keep things simple as long as it just works. Assuming familiarity with the Go installation and the GOPATH mess, we can focus on the code's root directory.

➜ code tree -L 2
.
├── CONTRIBUTING.md
├── CHANGELOG.md
├── Gomfile
├── LICENCE
├── main.go
├── main_test.go
├── Makefile
├── shippable.yml
├── README.md
├── _bin
│   ├── gocov
│   ├── golint
│   ├── gom
│   └── gopm
└── _vendor
    ├── bin
    ├── pkg
    └── src

To begin with, README.md, LICENCE, and CONTRIBUTING.md are the usual important documents for any code expected to be shared or used. Especially with open source, we should care about, and clearly state, what the project does, how it works, and how one can (and cannot) use it. Writing a changelog is also a smart step in that direction.

Package manager

The package manager is certainly a huge matter of discussion among developers. The community was left to build upon the go get tool, and many solutions have arisen to bring deterministic builds to Go code. While most of them are good enough tools, Godep is the most widely used, but Gom is my personal favorite:

Simplicity with explicit declaration and tags

# Gomfile
gom 'github.com/gin-gonic/gin', :commit => '1a7ab6e4d5fdc72d6df30ef562102ae6e0d18518'
gom 'github.com/ogier/pflag', :commit => '2e6f5f3f0c40ab9cb459742296f6a2aaab1fd5dc'

Dependency groups

# Gomfile (continuation)
group :test do
  # testing libraries
  gom 'github.com/franela/goblin', :commit => 'd65fe1fe6c54572d261d9a4758b6a18d054c0a2b'
  gom 'github.com/onsi/gomega', :commit => 'd6c945f9fdbf6cad99e85b0feff591caa268e0db'
  gom 'github.com/drewolson/testflight', :commit => '20e3ff4aa0f667e16847af315343faa39194274a'
  # testing tools
  gom 'golang.org/x/tools/cmd/cover'
  gom 'github.com/axw/gocov', :commit => '3b045e0eb61013ff134e6752184febc47d119f3a'
  gom 'github.com/mattn/goveralls', :commit => '263d30e59af990c5f3316aa3befde265d0d43070'
  gom 'github.com/golang/lint/golint', :commit => '22a5e1f457a119ccb8fdca5bf521fe41529ed005'
  gom 'golang.org/x/tools/cmd/vet'
end

Self-contained project

# install gom binary
go get github.com/mattn/gom

# ... write Gomfile ...

# install production and development dependencies in `./_vendor`
gom -test install

We just declared and bundled the full requirements under the project's root directory.
This approach plays nicely with trendy containers.

# we don't even need Go to be installed
# install tooling in ./_bin
mkdir _bin && export PATH=$PATH:$PWD/_bin
docker run --rm -it --volume $PWD/_bin:/go/bin golang go get -u -t github.com/mattn/gom

# assuming the same Gomfile as above
docker run --rm -it --volume $PWD/_bin:/go/bin --volume $PWD:/app -w /app golang gom -test install

An application can quickly come to rely on a significant number of external resources. Dependency managers like Gom offer a simple workflow to avoid breaking-change pitfalls - a widespread curse in our fast-paced industry.

Helpers

The ambitious developer in love with productivity can complete their toolbox with powerful editor settings, an automatic fix, a Go repl, a debugger, and so on. Despite being young, the language comes with a growing set of tools that help developers produce a healthy codebase.

Code

With basic foundations in place, let's develop a micro server powered by Gin, an impressive web framework I have had great experience with. The code below highlights common best practices one can use as a starter.

// {{ Licence informations }}
// {{ build tags }}

// Package {{ pkg }} does ...
//
// More specifically it ...
package main

import (
	// built-in packages
	"log"
	"net/http"

	// third-party packages
	"github.com/gin-gonic/gin"
	flag "github.com/ogier/pflag"
	// project packages placeholder
)

// Options stores cli flags
type Options struct {
	// Addr is the server's binding address
	Addr string
}

// Hello greets incoming requests.
// Because exported identifiers appear in godoc, they should be documented correctly
func Hello(c *gin.Context) {
	// follow HTTP REST good practices with an adequate http code and json-formatted response
	c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

// Handler maps endpoints with callbacks
func Handler() *gin.Engine {
	// gin's default instance provides logging and crash recovery middlewares
	router := gin.Default()
	router.GET("/greeting", Hello)
	return router
}

func main() {
	// parse command line flags
	opts := Options{}
	flag.StringVar(&opts.Addr, "addr", ":8000", "server address")
	flag.Parse()

	if err := Handler().Run(opts.Addr); err != nil {
		// exit with a message and a status code of 1 on errors
		log.Fatalf("error running server: %v\n", err)
	}
}

We're going to take a closer look at two important parts this snippet is missing: error handling and the benefits of interfaces.

Errors

One tool we could have mentioned above is errcheck, which checks that you checked errors. While it sometimes produces cluttered code, Go's error handling strategy enforces rigorous development:

When justified, use errors.New("message") to provide a helpful output.

If one needs custom arguments to produce a sophisticated message, use fmt.Errorf("math: square root of negative number %g", f).

For even more specific errors, let's create new ones:

type CustomError struct {
	arg  int
	prob string
}

// Usage: return -1, &CustomError{arg, "can't work with it"}
func (e *CustomError) Error() string {
	return fmt.Sprintf("%d - %s", e.arg, e.prob)
}

Interfaces

Interfaces in Go unlock many patterns. In the golden age of components, we can leverage them for API composition and proper testing. The following example defines a Project structure with a Database attribute.
type Database interface {
	Write(string, string) error
	Read(string) (string, error)
}

type Project struct {
	db Database
}

func main() {
	// backend here stands in for a package exposing a MySQL-backed Database
	db := backend.MySQL()
	project := &Project{db: db}
}

Project doesn't care about the underlying implementation of the db object it receives, as long as that object implements the Database interface (that is, implements the Read and Write signatures). Meaning, given a clear contract between components, one can switch MySQL and Postgres backends without modifying the parent object. Apart from this separation of concerns, we can mock a Database and inject it to avoid heavy integration tests. Hopefully this tiny, carefully written snippet doesn't hide too many horrors, and we're going to build it with confidence.

Build

We didn't adopt a Test Driven Development style, but let's catch up with some unit tests. Go provides a full-featured testing package, but we are going to level up the game thanks to a complementary combo. Goblin is a thin framework featuring behavior-driven development, close to the awesome Mocha for node.js. It also features an integration with Gomega, which brings us fluent assertions. Finally, testflight takes care of managing the HTTP server for pseudo-integration tests.

// main_test.go
package main

import (
	"testing"

	"github.com/drewolson/testflight"
	. "github.com/franela/goblin"
	. "github.com/onsi/gomega"
)

func TestServer(t *testing.T) {
	g := Goblin(t)

	// special hook for gomega
	RegisterFailHandler(func(m string, _ ...int) { g.Fail(m) })

	g.Describe("ping handler", func() {
		g.It("should return ok status", func() {
			testflight.WithServer(Handler(), func(r *testflight.Requester) {
				res := r.Get("/greeting")
				Expect(res.StatusCode).To(Equal(200))
			})
		})
	})
}

This combination allows readable tests to produce readable output. Given the crowd of developers who scan tests to understand new code, we added interesting value to the project. It would certainly attract even more kudos with a green test suite. The following pipeline of commands tries to validate clean, bug-free, code-smell-free, future-proof, coffee-making code.

# lint the whole project
golint ./...

# run tests and produce a cover report
gom test -covermode=count -coverprofile=c.out

# make this report human-readable
gocov convert c.out | gocov report

# push the result to https://coveralls.io/
goveralls -coverprofile=c.out -repotoken=$TOKEN

Conclusion

Countless posts conclude this way, but I'm excited to state that we merely scratched the surface of proper Go coding. The language exposes flexible primitives and unique characteristics one will learn the hard way, one experiment after another. Being able to trade a single binary against a package repository address is one such example, like JavaScript support. This article introduced methods to kick-start Go projects, manage dependencies, and organize code, and offered guidelines and a testing suite. Tweak this opinionated guide to your personal taste, and remember to write simple, testable code.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.


Visit a 3D printing filament factory - 3dk.berlin

Michael Ang
02 Sep 2015
5 min read
Have you ever wondered where the filament for your 3D printer comes from and how it’s made? I recently had the chance to visit 3dk.berlin, a local filament manufacturer in Berlin. 3dk.berlin distinguishes itself by offering a huge variety of colors for its filament. As a designer it’s great to have a large palette of colors to choose from, and I chose 3dk filament for my Polygon Construction Kit workshop at Thingscon 2015 (they’re sponsoring the workshop). Today we’ll be looking at how one filament producer takes raw plastic and forms it into the colored filament you can use in your 3D printer.

Some of the many colors offered by 3dk.berlin

3dk.berlin is located at the very edge of Berlin, in the area of Heiligensee, which is basically its own small town. 3dk is a family-owned business run by Volker Bernhardt as part of BERNHARDT Kunststoffverarbeitungs GmbH (that’s German for "plastics processing company"). 3dk is focused on bringing BERNHARDT’s experience with injection-moulded and extruded plastics to the new field of 3D printing.

Inside the factory, neutral-colored plastic pellets are mixed with colored "master batch" pellets and then extruded into filament. The extruding machine melts and mixes the pellets, then squeezes them through a nozzle, which determines the diameter of the extruded filament. The hot filament is run through a cool water bath and coiled on large spools. Conceptually it’s quite simple, but getting extremely consistent filament diameter, color, and printing properties is demanding. Small details like air and moisture trapped inside the filament can lead to inconsistent prints. Bigger problems like material contamination can lead to a jammed nozzle in your printer. 3dk spent 1.5 years developing and fine-tuning their machine before they were satisfied with the results, to a German level of precision. They didn’t let me take pictures of their extrusion machines, since some of their techniques are proprietary, but you can get a good view of a similar machine in this filament extrusion machine video.

Florian (no small guy himself) with a mega-spool from the extrusion machine

The filament from the extrusion machine is wound onto 10kg spools - these are big! The filament from these large spools is then rewound onto smaller spools for sale to customers. 3dk tests their filament on a variety of printers in-house to ensure ongoing quality. Where we might do a small print of 20 grams to test a new filament, 3dk might do a "small" test of 2kg!

Test print with a full-size plant (about 4 feet tall)

Why produce filament in Germany when cheaper filament is available from abroad? Florian Deurer from 3dk explained some of the benefits to me. 3dk gets their PLA base material directly from a supplier that does use additives; the same PLA is used by other manufacturers for items like food wrapping. The filament colorants come from a German supplier and are also "harmless for food". For the colorants in particular, there might be the temptation for less scrupulous or less regulated manufacturers to use toxic substances like heavy metals or other chemicals. Beyond safety and practical considerations like printing quality, using locally produced filament provides local jobs.

What really sets 3dk apart from other filament makers in an increasingly competitive field is the range of colors they produce. I asked Florian for some orange filament and he asked "which one?"
The colors on offer range from subtle (there’s a whole selection of whites, for example) to more extreme bright colors and metallic effects. Designers will be happy to hear that they can order custom colors using the Pantone color standard (for orders of 5kg / 11lbs and up).

Which white would you like? Standard, milky, or pearl?

Looking to the future of 3D printing, it will be great to see more environmentally friendly materials become available. The most popular material for home 3D printing right now is probably PLA plastic (the same material 3dk uses for most of its filament). PLA is usually derived from corn, which is an annually renewable crop. PLA is technically compostable, but this has to take place in industrial composting conditions at high temperature and humidity. People are making progress on recycling PLA and ABS plastic prints back into filament at home, but the machines to make this easy and more common are still being developed.

100% recycled PLA print of Origamix_Rabbit by Mirice, printed on an i3 Berlin

3dk offers a filament made from industrially recycled PLA. The color and texture of this material vary a little along the spool, but I found it to print very well in my first tests, and your object ends up a nice, slightly transparent olive green. I recently got a "sneak peek" at a filament 3dk is working on that is compostable under natural conditions. This filament is pre-production, so the specifications haven’t been finalized, but Florian told me that the prints are stable under normal conditions but can break down when exposed to soil bacteria. The pigments also contain "nothing bad" and break down into minerals. The sample print I saw was flexible, with a nice surface finish and color. A future where we can manufacture objects at home and throw them onto our compost heap after giving them some good use sounds pretty bright to me!

A friendlier future for 3D printing? This print can naturally biodegrade

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of technology and human experience. He is the creator of the Polygon Construction Kit, a toolkit for creating large physical polygons using small 3D-printed connectors. His Light Catchers project collects crowdsourced light recordings into a public light sculpture.


What 6 Months with an Open Source 3D Printer Taught Me

Michael Ang
26 Sep 2014
7 min read
3D printing is certainly a hot topic today, and having your own printer at home is becoming increasingly popular. There are a lot of options to choose from, and in this post I'll talk about why I chose to go with an open source 3D printer instead of a proprietary pre-built one, and what my experience with the printer has been. By sharing my 6 months of experience I hope to help you decide which kind of printer is best for you.

My Prusa i3 Berlin 3D printer after 6 months

Back in 2006 I had the chance to work with a 3D printer, when the thought of having a 3D printer at home was mostly a fantasy. The printer in question was made by Stratasys, at the Eyebeam Art+Tech center in New York City. That printer cost upwards of $30,000 - not exactly something to have at your house! The idea of doing something wrong with the printer and having to call a technician in to fix it was also a little intimidating. (My website has some of my early experiments with 3D printing.)

Flash forward to today and there are literally dozens (or probably hundreds) of 3D printer designs available on the market. The designs range from high-end printers that can print plastic with embedded carbon fiber, to popular designs from MakerBot and DIY kits on eBay. One of the first low-cost 3D printers was the RepRap. The goal of the RepRap project is to create a self-replicating machine, where the parts for the machine can be fabricated by the machine itself. In practice this means that many of the parts of a RepRap-style 3D printer are actually printed on a RepRap printer. Most people who build RepRap printers start with a kit and then assemble the printer themselves. If the idea of a self-replicating machine sounds interesting, then RepRap may be for you.

RepRap is now more of a philosophy and community than any specific printer. Once you assemble your printer you can make changes and upgrades to the machine by printing yourself new parts. There are certainly some challenges to building your own printer, though, so let's look at some of the advantages and disadvantages of going with an open source printer (building from a kit) versus a pre-packaged printer.

Advantages of a pre-assembled commercial printer:
- Should print right out of the box
- Less tinkering needed to get good prints
- Each printer of a particular model is the same, making it easier to get support

Advantages of an open source (RepRap-style) kit:
- Typically cheaper than pre-built
- Learn more about how the printer works
- Easier to make changes to the machine, and complete plans are available
- Easier to experiment with, for example different printing materials

Disadvantages of pre-assembled:
- Making changes may void your warranty
- Typically more expensive
- May be locked into specific software or filament

Disadvantages of open source:
- Can take a lot of work to get good prints
- Potentially lots of decisions to make, not pre-packaged
- May spend as much time on the machine as actually printing

Technical differences aside, the idea of being part of an open source community based on the freedom to share knowledge and designs was really appealing. With that in mind I had a look at different open source 3D printer designs and capabilities. Since the RepRap designs are open source, anyone can modify them and create a "new" printer. In the end I settled on a variation of the Prusa i3 RepRap printer that is designed in Berlin, where I live. The process of getting a RepRap printer working can be challenging because there's so much to learn at first.
The Prusa i3 Berlin can be ordered as a kit with everything needed to build the printer, and with a workshop where you build the printer with the machine's designers over the course of a weekend. Two days to build a working 3D printer from a pile of parts? Yes, it can be done!

Most of the parts in the printer kit

Building the printer at the workshop saved an incredible amount of time. Questions like "does this look tight enough?" and "how does this part fit in here?" were answered on the spot. There are very active forums for RepRap printers with lots of people willing to help diagnose problems, but a few questions with even a one-day turnaround time quickly add up. By the end of the two days my printer was fully assembled and actually printed out a little plastic robot! This was pretty satisfying, knowing that the printer had started the weekend as a bundle of parts.

Quite a lot of wires

Assembling the plastic extruders

Thus began my 6-month (so far) adventure in 3D printing. It has been an awesome and at times frustrating journey. I mainly bought my printer to create connectors for my Polygon Construction Kit (Polycon). I'm printing connectors that assemble with some rods to make structures much larger than could be printed in one piece. My printer has been working well for that, but the main issue has been reliability and the need for continual tweaking. Instead of just "hitting print" there is a constant struggle to keep everything lined up and printing smoothly. Printing on my RepRap is a lot more like baking a soufflé than ordering a burger.

Completed printer in my studio

Some highlights of the journey so far:
- Printing out parts strong enough to assemble some of my Polycon sculptures and show them at an art show in Berlin
- Designing my own accessories for the printer and having them downloaded more than 1,000 times on Thingiverse (not bad for some rather specialized tools)
- Printing upgrades for the printer, based on the continually updated source files
- Being able to get replacement parts at the hardware store when one of the long threaded rods in the printer wore out

Sculpture with 3D printed connectors. Image courtesy of Lehrter Siebzehn.

And the lowlights:
- Never quite knowing if a print is going to complete successfully (though this can be a problem with many printers)
- Having enough trouble getting my first extruder working reliably for long prints that I haven't had time to get dual-extrusion prints working

Accessory I designed for calibrating the printer, which I then shared with others

As time goes on and I keep working on the printer, it's slowly getting more reliable, and I'm able to do more complicated prints without constant intervention. The learning process has been valuable too - I'm now able to look at basically every part of the machine and understand exactly what it's supposed to do. Once you really understand how a 3D printer works, you start to wonder what kind of upgrades are possible, or what other kinds of machine you could design.

Printed upgrade parts

A pre-packaged printer makes a lot of sense if you're mostly interested in printing things. The learning process for building your own printer can either be interesting or a frustrating obstacle, depending on your point of view. When you look at a print from your RepRap printer, it's incredible to consider that it is all built off the contributions and sharing of knowledge of a large community. If you're not just interested in making things, but making things that make things, then a RepRap printer might be for you!
Upgraded printer with polygon sculpture

About the author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.


An introduction to React - Part 2 (video)

Simon Højberg
14 Jan 2015
1 min read
Sample Code

You can find the sample code on Simon's Github repository.

About the Author

Simon Højberg is a Senior UI Engineer at Swipely in Providence, RI. He is the co-organizer of the Providence JS Meetup group and former JavaScript instructor at Startup Institute Boston. He spends his time building functional User Interfaces with JavaScript, and hacking on side projects like cssarrowplease.com. Simon recently co-authored "Developing a React Edge."

The Future as a Service

Edward Gordon
07 Apr 2016
5 min read
"As a Service" services (service²?) generally allow younger companies to scale quickly and efficiently. A lot of the pain of implementation is abstracted away, and they allow start-ups to focus on the key drivers of any company - product quality and product availability. For less than the cost of proper infrastructure investment, you can have highly-available, fully distributed, buzzword-enabled things at your fingertips to start running wild with. However, "as a Service" providers feel like they're filling a short-term void rather than building a long-term viable option for companies. Here's why.

1. Cost

The main driver of SaaS is that there are lower upfront costs. But it's a bit like the debit card versus credit card debate; if you have the money you can pay for it upfront and never worry about it again. If you don't have the money but need it now, then credit is the answer - and the associated continued costs.

For start-ups, a perceived low-cost model is ideal at first glance. With that, there's the downside that you'll be paying out of your aaS for the rest of your service with them, and moving out of the ecosystem that you thought looked so robust 4 years ago will give the sys admin that you have to hire in to fix it nightmares.

Cost is a difficult thing to balance, but there are still companies happily running on SQL Server 2005 without any problems; a high upfront cost normally means that it's going to stick around for ages (you'll make it work!). To be honest, for most small businesses, investment in a developer who can stitch together open source technologies to suit your needs will be better than running to the closest spangly Service provider. However, aaS does mean you don't need System Administrators stressing about ORM-generated queries.

2. Ownership of data

An under-discussed but vital issue that lies behind the aaS movement is the ownership of data, and what this means to companies. How secure are the bank details of your clients? How does the aaS provider secure against attacks? Where does this fit in terms of compliance? To me, the risks associated with giving your data to another company to keep are too high to justify, even if it's backed up by license agreements and all types of unhackable SSL things (#Heartbleed). After all, a bank is more appealing to thieves than a safe behind a picture in your living room. Probably*. As a company, regardless of size, your integrity is all. I think you should own that.

3. The Internet as kingmaker

We once had an issue at the Packt office where, during a desk move, someone plugged an Internet cable (that's the correct term for them, right?) from one port to another, rather than into their computer. The Internet went down for half the day without anyone really knowing what was going on. Luckily, we still had local access to stuff - chapters, databases, schedules, and so on. If we were fully bought into the cloud, we would have lost a collective 240 man-hours from one office because of an honest mistake.

Using the Internet as your only connection point to the data you work with can, and will, have consequences for businesses that work with time-critical pieces of data. This leaves an interesting space open that, as far as I'm aware, very few "as a Service" providers have explored: hybrid cloud.
If the issue, basically, is the Internet and what cloud storage means to you operationally and in terms of data compliance, then a world where you can keep sensitive, "critical" data local while keeping bulk data with your cloud provider lets you leverage the benefits of both worlds. The advantages of speed and lack of overheads would still be there, as well as the added security of knowing that you still "own" your data and your brand reputation.

Hybrid clouds generally seem to be an emergent solution in the market at large. There are even solutions now on Kickstarter that provide you with a "cloud" where you own your data. Lovely. Hell, you can even make your own PaaS with Chef and Docker. I could go on.

The quite clear popularity of "as a Service" products means there's value in the services they're offering. At the moment, though, there are enough problems inherent in adoption to believe that they're a stop-gap on the way to something more lasting. The future, I think, lies away from the black and white of aaS and on-premises software. There are advantages in both, and as we continue to develop services and solutions that blend the two, I think we're going to end up at a more permanent solution to the argument.

*I don't actually advocate the safe behind a picture method. More of a loose floorboard man myself.
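As a footnote to the cost point in 1, the debit-versus-credit trade-off is easy to put rough numbers on. Here's a toy break-even calculation in Python; every figure is invented purely for illustration:

# Toy break-even: owning infrastructure upfront vs paying "as a Service".
# All figures are invented for illustration.
upfront_infrastructure = 24_000   # one-off cost of owning the kit
monthly_service_fee = 900         # the ongoing aaS bill

months_to_break_even = upfront_infrastructure / monthly_service_fee
print(f"Owning pays for itself after ~{months_to_break_even:.0f} months")
# ~27 months; before that point, the service is the cheaper option

If you expect to be around (and locked in) well past the break-even point, the upfront route starts to look a lot like the debit card.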


Responsive Web Design is Hard

Ed Gordon
29 Oct 2014
7 min read
Last week, I embarked on a quest to build my first website, one that would simultaneously deliver on two puns; I would "launch" my website with a "landing" page showing a rocket sailing across the stars. On my journey, I learned SO much that it probably best belongs in a BuzzFeed list.

7 things only a web dev hack would know

1. "Position" is a thing no one on the Internet knows about. You change the attribute until it looks right, and hope no one breaks it.
2. The Z-index has a number randomly ascribed until the element goes where you want.
3. CSS animations are beyond my ability as someone who's never really written CSS before. So is parallax scrolling. So is anything other than 'width: x%'.
4. Hosting sites ring you. All the time. They won't leave you alone.
5. The more tabs you have open, the better you are as a person.
6. Alt+Tab is the best keyboard hack ever.
7. Web development is 60% deleting things you once thought were integral to the design.

So, I bought a site, jslearner.com (cool domain, right?), included the boilerplate Bootstrap CDN, and got to work.

Act I: Design, or, 'how to not stick to plan'

Web design starts with the design bit, right? My initial drawing, like all great designs, was done on the back of an envelope that contained relatively important information. (Author's note: I've now lost the envelope because I left it in the work scanner. Please can I get it back?!)

As you can clearly see from the previous image, I had a strong design aesthetic for the site, right from the off. The rocket (bottom left) was to travel along the line (line for illustration purposes only) and correct itself, before finally landing on a moon that lay across the bottom of the site. In a separate drawing, I'd also decided that I needed two rows consisting of three columns each, so that my rocket could zoom from bottom left to top right, and back down again. This will be relevant in about 500 words.

Confronting reality

I'm a terrible artist, as you can see from my hand-drawn rocket. I have no eye for design. After toying with trying to draw the assets myself, I decided to pre-buy them. The pack I got from Envato, however, came as a PNG and a file I couldn't open. So, I had to hack the PNG (puts on shades): I used Pixlr and magic-wanded the other planets away, so I was left with a pretty dirty version of the planet I wanted. After I had hand-painted the edges, I realised that I could just magic-wand the planet I wanted straight out. This wouldn't be the first 2 hours I wasted.

I then had to get my rocket in order. Another asset paid for, and this time I decided to try and do it professionally. I got Inkscape, which is baffling, and pressed buttons until my rocket looked like it had come to rest. So this:

After some tweaking, became this:

After flipping the light sources around, I was ready to charge triumphantly on to the next stage of my quest; the fell beast of design was slain. Development was going to be the easy part. My rocket would soar across the page, against a twinkling backdrop, and land upon my carefully crafted assets.

Act II: Development, or, 'responsive design is hard'

My first test was to actually understand the Bootstrap column thingy… CSS transformations and animations would be taking a back seat in the rocket ship. These columns and rows were to hold my content. I added some rules to include the image of the planets and a background color of 'space blue' (that's a thing, I assure you). My next problem was that the big planet wasn't sitting at the bottom of the page.
Nothing I could do would rectify this. The number of open tabs is increasing…

This was where I learned the value of using the Chrome/Mozilla developer tools to write rules and see what works. Hours later, I figured out that position: fixed and width: 100% seemed to do the trick. At this point, the responsive element of the site was handling itself. The planets generally seemed to be fine when scaling up and down. So, the basic premise was set up. Now I just had to add the rocket. Easy, right?

Responsive design is really quite hard

When I positioned my rocket neatly on my planet - using % spacing of course - I decided to resize the browser. It went literally everywhere. Up, down, to the side. This was bad. It was important to the integrity of my design for the rocket to sit astride the planet. The problem I was facing was that I just couldn't get the element to stay in the same place whilst also adjusting its size. Viewing it on a 17-inch desktop, it looked like the rocket was stuck in mid-air. Not the desired effect.

Act III: Refactoring, or, 'sticking to plan == stupid results'

When I 'wireframed' my design (in pencil on an envelope), for some reason I drew two rows. Maybe it's because I was watching TV, whilst playing Football Manager. I don't know. Whatever the reason, the result of this added row was that when I resized, the moon stuck to its row, and the rocket went up with the top of the browser. Responsive design is as much about solid structure as it is about fancy CSS rules. Realising this point would cost me hours of my life.

Back to the drawing board. After restructuring the HTML bits (copy/paste), I'd managed to get the rocket/moon into the same div class. But it was all messed up, again. Why tiny moon? Why?!

Again, I spent hours tweaking CSS styles in the browser until I had something closer to what I was looking for. Rocket on moon, no matter the size. I feel like a winner, listen to the Knight Rider theme song, and go to bed.

Act IV: Epiphany, or, 'expectations can be fault tolerant'

A website containing four elements had taken me about 15 hours of work to make look 'passable'. To be honest, it's still not great, but it does work. Part of this is my own ignorance of speedier development workflows (design in browser, use the magic wand, and so on). Another part of this was just how hard responsive design is. What I hadn't realised was how much of responsive design depends on clever structure and markup. I hadn't realised that this clever structure doesn't even start with HTML - for me, it started with a terrible drawing on the back of an envelope. The CSS part enables your 'things' to resize nicely, but without your elements in the right places, no amount of {z-position: -11049;} will make it work properly. It's what makes learning resources so valuable; time invested in understanding how to do it properly is time well spent. It's also why Bootstrap will help make my stuff look better, but will never on its own make me a better designer.


5 ways Machine Learning is transforming digital marketing

Amey Varangaonkar
04 Jun 2018
7 min read
The enterprise interest in Artificial Intelligence is surging. In an era of cut-throat competition where it's either do or die, businesses have realized the transformative value of AI to gain an upper hand over their rivals. Given its direct contribution to business revenue, it comes as no surprise that marketing has become one of the major application areas of machine learning. Per Capgemini:

- 84% of marketing organizations are implementing Artificial Intelligence in 2018, in some capacity
- 3 out of 4 organizations implementing AI techniques have managed to increase the sales of their products and services by 10% or more

In this article, we look at 5 innovative ways in which machine learning is being used to enhance digital marketing.

Efficient lead generation and customer acquisition

One of the major keys to driving business revenue is getting more customers on board who will buy your products or services repeatedly. Machine learning comes in handy for identifying potential leads and converting those leads into customers. With the help of pattern recognition techniques, it is possible to understand a particular lead's behavioral and purchase trends. Through predictive analytics, it is then possible to predict whether a particular lead will buy the product or not. That lead is then put into the marketing sales funnel for targeted marketing campaigns, which may ultimately result in a purchase.

A cautionary note here - with GDPR (General Data Protection Regulation) in place across the EU (European Union), there are restrictions on the manner in which AI algorithms can be used to make automated decisions based on consumer data. This makes it imperative for businesses to strictly follow the regulation and operate under its purview, or they could face heavy penalties. As long as businesses respect privacy and follow basic human decency, such as asking for permission to use a person's data or informing them about how their data will be used, marketers can reap the benefits of data-driven marketing like never before. It all boils down to applying common sense while handling personal data, as one GDPR expert put it. But we all know how uncommon that sense is!

Customer churn prediction is now possible

'Customer churn rate' is a popular marketing term referring to the number of customers who opt out of a particular service offered by the company over a given time period. The churn time is calculated based on the customer's last interaction with the service or the website. It is crucial to track the churn rate, as it is a clear indicator of the progress - or the lack of it - that a business is making. Predicting the customer churn rate is difficult - especially for e-commerce businesses selling a product - but it is not impossible, thanks to machine learning. By understanding historical data and the user's past website usage patterns, these techniques can help a business identify the customers who are most likely to churn soon, and when that is expected to happen. Appropriate measures can then be taken to retain such customers - special offers and discounts, timely follow-up emails, and so on - without any human intervention. American entertainment giant Netflix makes perfect use of churn prediction to keep its churn rate at just 9%, lower than any other subscription streaming service out there today. Not just that, it also manages to market its services to drive more customer subscriptions.
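To make the churn idea concrete, here's a minimal sketch of what such a model can look like in Python, using scikit-learn on synthetic data. The feature names (days since last visit, orders in the last 90 days, average session length) are hypothetical stand-ins, not any company's real signals:

# Minimal churn-prediction sketch: train on historical customers,
# then score current customers by their probability of churning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic history: [days_since_last_visit, orders_last_90d, avg_session_minutes]
X = rng.random((500, 3)) * [60, 10, 30]
# Synthetic label: customers absent for over a month mostly churned
y = (X[:, 0] > 30).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# predict_proba gives a churn risk; flag customers worth a retention offer
churn_risk = model.predict_proba(X_test)[:, 1]
print("High-risk customers to target:", int((churn_risk > 0.7).sum()))

The model itself is interchangeable; the marketing value is in the last two lines, where a risk score turns into a concrete retention action.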
Dynamic pricing made easy

In today's competitive world, products need to be priced optimally. It has become imperative that companies define extremely competitive and relevant pricing for their products, or else customers might not buy them. On top of this, there are fluctuations in the demand and supply of the product, which can affect the product's pricing strategy. With the use of machine learning algorithms, it is now possible to forecast price elasticity by considering various factors, such as the channel on which the product is sold. Other factors taken into consideration could be the sales period, the product's positioning strategy, or customer demand.

For example, e-commerce giants Amazon and eBay tweak their product prices on a daily basis. Their pricing algorithms take into account factors such as the product's popularity among customers, the maximum discount that can be offered, and how often the customer has purchased from the website. This strategy of dynamic pricing is now being adopted by almost all the big retail companies, even in their physical stores. There is specialized software available that leverages machine learning techniques to set dynamic prices for products. Competera is one such pricing platform, which transforms retail through ongoing, timely, and error-free pricing for category revenue growth and improvements in customer loyalty tiers. To know more about how dynamic pricing actually works, check out this Competitoor article.

Customer segmentation and radical personalization

Every individual is different and has unique preferences, likes, and dislikes. With machine learning, marketers can segment users into different buyer groups based on a variety of factors such as their product preferences, social media activities, their Google search history, and much more. For instance, there are machine learning techniques that can segment users based on who loves to blog about food, or loves to travel, or even which show they are most likely to watch on Netflix! The website can then recommend or market products to these customers accordingly. Affinio is one such platform used for segmenting customers based on their interests.

Content and campaign personalization is another widely recognized use case of machine learning for marketing. Machine learning algorithms are used to build recommendation systems that take into consideration the user's online behavior and website usage to analyse and recommend products that he/she is likely to buy. A prime example of this is Google's remarketing strategy, which tries to reconnect with customers who leave the website without buying anything by showing them relevant ads across different devices. The best part about recommendation systems is that they are able to recommend two completely different products to two customers with different usage patterns. Incorporating them within the website has turned out to be a valuable strategy for increasing customer loyalty and overall lifetime value.

Improving customer experience

Gone are the days when a customer who visited a website had to use the 'Contact Me' form in case of any query, and an executive would get back with the answer. These days, chatbots are integrated into almost every e-commerce website to answer ad-hoc customer queries, and even suggest products that fit the customer's criteria.
There are live-chat features included in these chatbots as well, which allow customers to interact with the chatbot and understand the product features before they buy. For example, IBM Watson has a really cool feature called the Tone Analyzer. It parses the feedback given by the customer and identifies the tone of the feedback - whether it's angry, resentful, disappointed, or happy. It is then possible to take appropriate measures to ensure that a disgruntled customer is satisfied, or to appreciate the customer's positive feedback - whatever may be the case.

Marketing will only get better with machine learning

Highly accurate machine learning algorithms, better processing capabilities, and cloud-based solutions are now making it possible for companies to get the most out of AI for their marketing needs. Many companies have already adopted machine learning to boost their marketing strategy, with major players such as Google and Facebook already leading the way. Safe to say, many more companies - especially small and medium-sized businesses - are expected to follow suit in the near future.

Read more:
- How machine learning as a service is transforming cloud
- Microsoft Open Sources ML.NET, a cross-platform machine learning framework
- Active Learning: An approach to training machine learning models efficiently

Max Fatouretchi explains the 3 main pillars of effective Customer Relationship Management

Packt Editorial Staff
03 Jun 2019
6 min read
Customer Relationship Management (CRM) is about process efficiency, reducing operational costs, and improving customer interactions and experience. The never-ending CRM journey can be beautiful and exciting, and it's something that matters to all the stakeholders in a company. One important saying is that CRM matters to every role in a company, and everyone needs to feel a sense of ownership right from the beginning of the journey. In this article we will look at the 3 main pillars of effective customer relationship management.

This article is an excerpt taken from the book The Art of CRM, written by Max Fatouretchi. Max, founder of the Academy4CRM institute, draws on his experience over 20 years and 200 CRM implementations worldwide. The book covers modern CRM opportunities and challenges based on the author's years of experience, including AI, machine learning, cloud hosting, and GDPR compliance.

Three key pillars of CRM

The main role of the architect is to design a solution that can not only satisfy the needs and requirements of all the different business users, but at the same time have the agility and structure to provide a good foundation for future applications and extensions. Having understood the drivers and the requirements, you are ready to establish the critical quality properties the system will have to exhibit, and to identify scenarios that characterize each one of them. The output of this process is a tree of attributes, a so-called quality attribute tree, covering usability, availability, performance, and evolution.

You always need to consider that the CRM rollout in the company will affect everyone, and above all, it needs to support the business strategies while improving operational efficiencies, enabling business orchestration, and improving customer experience across all channels.

Technically speaking, there are three main pillars of any CRM implementation; these deliver value to the business:

Operational CRM

The operational CRM is all about marketing, sales, and services functionalities. We will cover some case studies later in this book from different projects I've personally engaged with, across a wide area of applications.

Analytical CRM

The analytical CRM uses the data "collected" by the operational CRM to provide users and business leaders with individual KPIs, dashboards, and analytical tools, enabling them to slice and dice the data about their business performance as they need. This is the foundation for business orchestration.

Collaboration CRM

The collaboration CRM provides the technology to integrate all kinds of communication channels and front-ends with the core CRM, for internal and external users alike: employees, partners, and customers (so-called bring your own device). This includes support for different types of devices that can integrate with the CRM core platform, be administered with the same tools, and leverage the same infrastructure, including security and maintenance. They use the same platform, the same authentication procedures, and the same workflow engine, while fully leveraging the core entities and data.

With these three pillars in place, you'll be able to create a comprehensive view of your business and manage client communication across all your channels. Through this, you'll have the ingredients for predictive client insights, business intelligence, and marketing, sales, and services automation. But before we move on, Figure 1.1 is an illustration of the three pillars of a CRM solution and related modules, which should help you visualize what we've just talked about:
But before we move on, Figure 1.1 is an illustration of the three pillars of a CRM solution and related modules, which should help you visualize what we've just talked about: Figure 1.1: The three pillars of CRM It's also important to remember that any CRM journey always begins with either a business strategy and/or a business pain-point. All of the stakeholders must have a clear understanding of where the company is heading to, and what the business drivers for the CRM investment are. It's also important for all CRM team members to remember that the potential success or failure of CRM projects remains primarily on business stakeholders and not on the IT staff. Role-based ownership in CRM Typically, the business decision makers are the ones bringing up the need and sponsoring the CRM solution. Often but not always, the IT department is tasked with the selection of the platform and conducting the due diligence with a number of vendors. More importantly, while different business users may have different roles and expectations from the system, everyone needs to have a common understanding of the company's vision, while the team members need to support the same business strategies at the highest level. The team will work together towards the success of the project for the company as a whole while having individual expectations. In addition to that, you will notice that the focus and the level of engagement of people involved in the project (project team) will vary during the lifecycle of the project as time goes on. It also helps to categorize the characteristics of team members from visionary to leadership, stakeholders, and owners. While key sponsors are more visionary, and usually, the first players to actively support and advocate for a CRM strategy, they will define the tactics, and end users will ultimately take more ownership during the deployment and operation phase. In the Figure 1.2 we see the engagement level of stakeholders, key-users, and end-users in a CRM implementation project. The visionaries are here to set the company's vision and strategies for the CRM, the key users (department leads) are the key-sponsors who promote the solution, and the end-users are to engage in reviews and provide feedback. Figure 1.2: CRM role based ownership Before we start the development, we must have identified the stakeholders, and have a crystal-clear vision of the functional requirements based on the business requirements. Furthermore, we must also ensure we have converted these to a detail specification. All this is done by business analysts, project managers, solution specialists, and architects, with the level of IT engagement being driven by the outcome of this process. This will also help to define your metrics for business Key Performance Indicators (KPI) figures and for TCO/ROI (Total-Cost-of-Ownership and Return-on-Investment) of the project. These metrics are a compass and a measurement tool for the success of your CRM project and will help enable your need to justify your investment but also allow you to measure the improvements you've made. You will also use these metrics as a design guide for an efficient solution that not only provides the functionalities supporting the business requirements and justification of your investment but something that also delivers data for your CRM dashboards. This data can then help fine tune the business processes for higher efficiencies going forward. 
In this article, we've looked at the most important elements of a CRM system: operational CRM, analytical CRM, and collaboration CRM. Bringing CRM up to date, The Art of CRM shows how to add AI and machine learning, ensure compliance with GDPR, and choose between on-premise, cloud, and hybrid hosting solutions.

Read more:
- What can Artificial Intelligence do for the Aviation industry
- 8 programming languages to learn in 2019
- Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos


Dispelling the myths of hybrid cloud

Ben Neil
24 May 2017
6 min read
The words "vendor lock" worry me more than I'd like to admit. Whether it's having too many virtual machines in ec2, an expensive lambda in Google Functions, or any random offering that I have been using to augment my on-premise Raspberry Pi cluster, it's really something I'vefeared. Over time, I realize it has impacted the way I have spoken about off-premises services. Why? Because I got burned a few times. A few months back I was getting a classic 3 AM call asking to check in on a service that was failing to report back to an on premise Sensu server, and my superstitious mind immediately went to how that third-party service had let my coworkers down. After a quick check, nothing was broken badly, only an unruly agent had hung on an on-premise virtual machine. I’ve had other issues and wanted to help dispel some of the myths around adopting hybrid cloud solutions. So, to those ends, what are some of these myths and are they actually true? It's harder and more expensive to use public cloud offerings Given some of the places I’ve worked, one of my memories was using VMware to spin up new VMs—a process that could take up to ten minutes to get baseline provisioning. This was eventually corrected by using packer to create an almost perfect VM, getting that into VMware images was time consuming, but after boot the only thing left was informing the salt master that a new node had come online.  In this example, I was using those VMs to startup a Scala http4s application that would begin crunching through a mounted drive containing chunks of data. While the on-site solution was fine, there was still a lot of work that had to be done to orchestrate this solution. It worked fine, but I was bothered by the resources that were being taken for my task. No one likes to talk with their coworker about their 75 machine VM cluster that bursts to existence in the middle of the workday and sets off resource alarms. Thus, I began reshaping the application using containers and Hyper.sh, which has lead to some incredible successes (and alarms that aren't as stressful), basically by taking the data (slightly modified), which needed to be crunched and adding that data to s3. Then pushing my single image to Hyper.sh, creating 100 containers, crunching data, removing those containers and finally sending the finalized results to an on premise service—not only was time saved, but the work flow has brought redundancy in data, better auditing and less strain on the on premise solution. So, while you can usually do all the work you need on-site, sometimes leveraging the options that are available from different vendors can create a nice web of redundancy and auditing. Buzzword bingo aside, the solution ended up to be more cost effective than using spot instances in ec2. Managing public and private servers is too taxing I’ll keep this response brief; monitoring is hard, no matter if the service, VM, database or container,is on-site or off. The same can be said for alerting, resource allocation, and cost analysis, but that said, these are all aspects of modern infrastructure that are just par for the course. Letting superstition get the better of you when experimenting with a hybrid solution would be a mistake.  The way I like to think of it is that as long as you have a way into your on-site servers that are locked down to those external nodes you’re all set. 
If you need to set up more monitoring, go ahead; the slight modification to Nagios or Zabbix rules won't take much coding, and the benefit of notifying on-call will always be at hand. The added benefit of a service that exists off-site is that it may have a level of resiliency that wasn't accounted for on-site, being more highly available through a provider. For example, sometimes I use Monit to restart a service, or depend on systemd/upstart to restart a temperamental one. Using AWS, I can set up alarms that trigger events to run a predefined run-book, which can handle a failure and save me from that aforementioned 3 AM wakeup. Note that both of these edge cases have their own solutions, which aren't "taxing" - just par for the course.

Too many tools, not enough adoption

You're not wrong, but if your developers and operators are not embracing at least a rudimentary adoption of these new technologies, you may want to look at your culture. People should want to try to reduce cost through these new choices, even if that change is cautious. Whether it's taking a second look at that S3 bucket or at a Pivotal Cloud Foundry app, nothing should be immediately discounted, because taking the time to apply a solution to an off-site resource can often result in an immediate saving in manpower. Think about it for a moment: given whatever internal infrastructure you're dealing with, there's a whole group of people around to support that application. Sometimes it's nice to give them a break - to take that learning curve onto yourself and empower your team (and wiki of choice) to create a different solution to what is currently available in your local infrastructure. Whether it's a Friday code jam, or just taking a pain point in a difficult deployment, crafting better ways of dealing with those common difficulties through a hybrid cloud solution can create more options. Which, after all, is what a hybrid cloud is attempting to provide: options that can be used to reduce costs, increase general knowledge, and bolster an environment that invites more people to innovate.

About the author

Ben Neil is a polyglot engineer who has had the privilege to fill a lot of critical roles, whether it's dealing with front/backend application development, system administration, integrating DevOps methodology, or writing. He has spent 10+ years creating solutions, services, and full lifecycle automation for medium to large companies. He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community that brings the benefits to companies striving to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on Github.