
Tech Guides

Employee or Freelance? What Skill Up 2016 reveals you should do

Sam Wood
19 Jul 2016
3 min read
At some point in their professional career, almost every developer has asked themselves the same question: corporate or freelance? Does it make more sense to work a salaried full-time position, or to strike out on your own into the world of clients and being your own boss? We were wondering the same thing, so we took a dive into the Skill Up 2016 data to find out.

Go freelance, or work for the Man?

To keep things simple, our analysis focused on results from the Anglosphere - in particular, the United Kingdom and the USA. We compared the salaries of full-time workers against freelancers, charting the cumulative distribution of stated salaries in the UK and the US, which revealed an interesting trend. Below the $100,000 mark it pays much better to work in a full-time position. However, in the upper range of salaries, the top 40% of freelancers and contractors get paid significantly more than their peers 'on payroll'.

Where's best to work freelance?

What industries are these highly paid freelancers working in to earn such great salaries? If you just look at the mean average salary, then the usual 'top three' industries of Insurance, Healthcare and Banking come out on top. But what do we see if we look at median income? Banking, Healthcare and Insurance are all still ranked - but it's Cyber Security that comes out on top by a fair margin. This no doubt reflects the massive demand for, and much greater importance placed on, Cyber Security in the last few years. If you're looking for a lucrative career in tech whilst still being your own boss, it looks like you might want to learn pentesting. We also compared UK freelancers and full-time workers against the same in the US, to find out which country was more rewarding for freelance work. In the UK, it generally pays to work full time for a company - but in the US, higher salaries can be earned by taking the leap and going freelance.
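The cumulative-distribution comparison described above is easy to reproduce. Here is a minimal JavaScript sketch; the salary arrays are invented illustrative data, not figures from the Skill Up survey:

```javascript
// Compare salary distributions of freelancers vs. full-time workers
// via an empirical cumulative distribution.

function cumulativeShareBelow(salaries, threshold) {
  // Fraction of respondents earning less than `threshold`.
  const below = salaries.filter((s) => s < threshold).length;
  return below / salaries.length;
}

// Invented toy data, for illustration only.
const fullTime = [45000, 60000, 75000, 90000, 110000];
const freelance = [30000, 50000, 80000, 130000, 160000];

// In this toy data set, more full-time workers sit below the
// $100,000 mark than freelancers do.
console.log(cumulativeShareBelow(fullTime, 100000));  // 0.8
console.log(cumulativeShareBelow(freelance, 100000)); // 0.6
```

Sweeping the threshold across the salary range and plotting the result gives the kind of cumulative-distribution chart the analysis describes.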
How to find freelance and contracting work

So, how do successful freelancers and contractors find new clients and new jobs? We asked our respondents how they came across their contract work, and picked out the common themes. About 50% of freelancers find their work through personal networking (it always pays to have friends), while the other half make use of popular freelancing sites. When analysed by age, younger freelancers overwhelmingly favored finding work through online services such as Freelancer.com and Upwork. These services are a good way to develop the personal network they can then rely on later in their careers, like their more experienced peers do.

The Year that was: 2014 in Game Development

Ed Bowkett
01 Apr 2015
6 min read
This blog will focus on the year that was: the top 5 important events to come out of game development, and what implications they had on the wider community. Bear in mind this is my opinion, but feel free to share other events you found equally as important.

1) AAA game engines become more freely available

As I mentioned in one of my first blogs of 2014, the Game Developers Conference this year was pretty spectacular for one reason. The companies behind the three industry-standard game engines - Crytek, Epic Games and Unity - all announced major updates to their engines. Unreal introduced a price point of $19 a month to gain full access to their AAA engine. Crytek too introduced an exceptional price point: $10 a month for their amazing engine, CryEngine. For these prices, not only can budding game developers finally develop awesome games, but tools that were previously studio-only have now been opened up to consumers such as you and me. This seismic shift can only be a positive change for the game industry.

2) VR moved forward… kind of

As linked above in my first blog, I confessed I didn't get Virtual Reality. This is mostly because I'm prone to motion sickness, but also because every time a VR headset is announced, it seems to get more hype than feels necessary. Take the Oculus Rift. People got excited for it and stated that both the headset and VR in general would revolutionize the way we play games. As soon as the Oculus Rift was purchased for a pretty handsome sum by Facebook, the critics and naysayers came out in force. If Virtual Reality is set to be a big thing, and in my opinion it will be, then criticism should not be focused on whoever is developing the headset, but on the technology itself. Don't abandon a technology in its infancy because a social network you don't particularly like has decided VR is worth investing in. Wait and give it a chance to mature.
If you insist that the Oculus Rift is not the way forward because, heaven forbid, it has the Zuckerberg Curse on it, there are alternatives like Project Morpheus from Sony, the Archos VR Headset and Samsung's VR Headset.

3) Microsoft buys Mojang

Minecraft is a classic. Endless hours poured into constructing buildings and structures, ranging from castles to the USS Enterprise, as well as exploring a seemingly endless world. What also made Minecraft so special was that it wasn't a large studio that made it; it was an indie developer, Markus 'Notch' Persson, who lovingly created the game. So it came as a little bit of a surprise when Microsoft bought Mojang for a reported $2.5 billion. Whilst little has been done since the purchase (it's only been 3 months), it feels quite shrewd from Microsoft, bearing in mind sales of its flagship console, the Xbox One, have been struggling against its competitor, the PS4. Purchasing a studio as iconic as Mojang, and gaining its fanbase, gives Microsoft a platform from which to develop and grow its gamer base again.

4) Quality of games decreasing

This section is going to be a bit of a rant, which I apologise for. Assassin's Creed Unity was a highly anticipated game. At a pricey $60, you would expect, as a hard-working gamer, to get a high-quality game on release. Sadly this wasn't the case. There were so many glitches, particularly in the PC version, that I felt quite embarrassed I had forked out $60 for it. Whilst a day-one patch did arrive, it shook my confidence in buying games at full price again. I would rather wait for a legendary Steam sale where the game has already been through several patches. Whilst I accept that sometimes glitches just happen, the added pressure of the price to the consumer, combined with the other alternatives gamers have to enjoy, means that a game needs to be of the highest quality when released.
The fact it wasn't shakes consumer confidence and lessens the enjoyment factor. Plus, not everyone has great internet speeds to download monstrous patches. It also beggars belief that none of the glitches were noticed during internal testing. Having tested games briefly in 2008, I had many a great time glitching on various pieces of terrain. But it was reported, and the glitch was fixed. That so many glitches went unnoticed says more about the internal testing than it does about the time constraints of getting the game out on time. Nonetheless, Assassin's Creed is a hugely popular franchise; as such, thousands of gamers purchased it and, unfortunately (or fortunately if you are Ubisoft), the market has spoken: we will continue to purchase buggy games.

5) GamerGate

I was pondering whether to include this at all, given I've not blogged about it in the past and have done my best to avoid discussing it in any of my blogs or tweets. The truth of the matter is that it has to be mentioned. For those that don't know, Gamergate was sparked by an accusation that female game developers slept with game reviewers to get more favorable reviews. From this spawned a wide-ranging debate that can be interpreted by some as a necessary conversation for the gaming industry to have, and by others as a cesspit of toxicity and harassment. The argument itself has mutated so much that I no longer know what it stands for. During its short life it has stood for everything from free speech and video game journalism ethics to connotations of racism, homophobia and a troll-feeding pit. Sadly, what it has succeeded in doing is showing the video games industry in a negative light. This is a big shame for an industry still arguably in its infancy. I hope that 2015 will bring back what makes the gaming community so special, and that we can get back to what it should be about: making great art that is loved by all. It will be interesting to see how the gaming industry grows from Gamergate, and if it can.

How to create tech teams that talk

Hari Vignesh
16 Jul 2017
5 min read
Great tech teams are not born, they're made. While greatness can be a product of stringent and cutthroat practices, building a talented and happy team can be a pleasant - not painful - process. It won't be easy, though. Keeping your tech team motivated isn't just about throwing budget at a monthly dinner. If you want to retain your best and brightest, you'll need to establish organizational excellence, giving employees opportunities to develop or do different work. "Employees want interesting work that challenges them. Performing meaningful work gives them a feeling that what they do is important, and provides opportunities for growth so that they feel competent," says Irene de Pater, an assistant professor at the National University of Singapore (NUS) Business School's Department of Management and Organization.

Don't build a wall around your tech team

It's important to include other key team members in the interview process. A poor culture fit can lead to turnover that costs companies up to 60 percent of the person's annual salary. A good fit ensures that engineers and members of different functions can communicate and work with each other effectively. Especially in products requiring complicated engineering, companies risk critical failure if engineers are not coordinated. For example, in a case study from the Harvard Business Review, the A380 'superjumbo' by Airbus overran time and budget constraints due to incompatibilities in the design of the plane's fuselage, discovered late in development. This could have been avoided with a shared communication platform and compatible computer-aided design (CAD) tools. To bring down this wall around your team, you can start by pairing up members on small projects. Let the paired programmers share a single desktop and review the code together. This allows developers to work together to find the best approach to writing good code.
Give people space, lots of it

Keeping your developers happy isn't a mystery. Give them space, and let them build stuff. Being able to invent and innovate without pressure allows employees to see their work as meaningful, and helps them develop closer relationships with others. "Employees want good relationships with their colleagues and superiors so that they feel they have friendships and social support at work," Professor de Pater says.

Feedback can sting, but it shouldn't hurt too much

Having a culture of honest feedback will encourage employees to contribute thoughts and ideas more fearlessly. Constructive honesty can be part of the training process. Managers can focus on positive reasons for giving feedback, prepare for the session well, and handle emotional reactions calmly. For example, instead of telling a new coder that his work isn't up to your standards, a senior developer would first start a conversation focusing on what the team is trying to achieve, and on whether the code takes the best approach. This way, the feedback is non-aggressive. Moreover, tech teams should focus on giving 360-degree feedback. Traditionally, feedback is top-down. With 360-degree feedback, supervisors, subordinates, and peers all provide staff with constructive advice, allowing an objective and holistic look into a person's work and relationships in the company. When delivered supportively, 360-degree feedback can increase self-awareness and improve individual and team effectiveness, studies show. The feedback then needs to be translated into intentional action to develop new habits, or change existing ones, to remain effective.

Find passionate technologists

Another step in building your dream team is to find passionate technologists. We're not talking about developers who come to the office and don't complain. We're talking about people who spend their whole day coding, just to go home and work on a side project - a blog, an app, an open source project.
You want these people on your side, because they'll go above and beyond when it comes to the technical side of your organization. They'll work hard to keep you up to date on the latest techniques and will take pride in helping your company succeed. It's easy to spot these passionate individuals during the interview process. All you need to do is ask interviewees about outside projects. If they're passionate about what they do, you'll be able to tell by how they talk about it. Grab the passionate ones.

The challenge of coordination

With so many teams moving so quickly, coordination will become a challenge. This is partly addressed by returning to a core team principle: strive for autonomy and independence. You must encourage teams to pursue projects that are within their power to take from idea to completion without the immediate need for external help. However, this doesn't eliminate the need to coordinate amongst teams altogether; the fact is, there are inevitably projects that require multiple teams to collaborate. In those cases, there are four ways to improve coordination. First, each of your teams holds a planning meeting every two weeks, which anyone can attend. Second, each team gives a demo every two weeks, showing off the work they've done recently; teams that are working together can attend each other's demos. Third, hold a weekly product backlog meeting, where all product teams share upcoming projects and discuss metrics related to recently launched features. Finally, each team's lead developer and product owner take specific responsibility for proactively reaching out to other teams to discuss upcoming work. These approaches are intentionally lightweight and simple. They rely on people's own initiative to share their work, communicate actively with others, and work out the details themselves to address the many challenges of coordination.
About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Knowledge in Motion

Sam Wood
17 Nov 2015
3 min read
Since releasing their first video training title in March 2013, Packt Publishing have been working hard on producing high-quality video learning. Now, to celebrate their 150th video title, Packt are inviting everyone to try out learning in motion with a massive 80% discount sale across all their video products.

Why learn from video?

See example projects in action. Write your code along with your instructor. Master a technology in just a few hours.

Why does Packt believe in video learning?

Packt wants to show the world how to put software to work. When surveyed, Packt's video users said that they most prized the practicality of video, and the ability to watch a project working visually. Video offers learners the chance to write their code along with their instructor, to really go hands-on when getting to grips with a new technology. In just a few hours, video lets its viewers gain a level of technical mastery - whether pushing their skills to the limit, or diving into something completely new.

What does Packt Video offer?

Courses on Web Development, Big Data, App Dev, Game Dev, and more. Over 350 hours of video training spread across 150+ courses. Expert authors from companies such as Google, IBM, and Yahoo.

Time is valuable - so all of Packt Video is built around the twin concepts of curation and concision. Expert authors and editors work hard to ensure each video is focused on the key information for each subject, taught in the most efficient and informative manner possible. All Packt video courses are authored by experts from across the globe - all with working professional experience as top consultants, trainers, or lead developers at some of the world's most innovative organisations. From Python to JavaScript, 3D Printing to Data Visualization, Packt Video encompasses a huge variety of subjects.
Packt are proud to know that 9/10 developers would buy a Packt video course again, and 8/10 would recommend Packt's video courses to their friends and colleagues.

Get started with Video Week

80% off all courses for this week only. Free courses available to view for this week only with a Packt login.

Want to try out Packt Video and start skilling up with new and dynamic learning? For this week only, all video courses are on offer at an amazing 80% discount. Top courses are available to view for free in our PacktLib web platform for anyone with a Packt account - check them out now!

Get the most out of your VR content: Maximizing the reach of your immersive experience - Part 2

Andreas Zeitler
10 Mar 2017
6 min read
This post gives a short summary of the landscape of VR devices and tools available today, continuing with non-mobile devices (read about mobile devices in the first part). I will also outline some pitfalls. A follow-up post will talk about the tools used to create VR content and optimizing your content for maximum reach.

Dedicated VR Devices

If you plug it into your PC, it's a dedicated VR device. The most popular ones are the Oculus Rift and the HTC Vive (developed by Valve, but manufactured by HTC). Another is the PlayStation VR set, which only works with the PlayStation, but is a dedicated VR device nonetheless. All dedicated VR devices include (or offer on top) sensors for positional head tracking and controllers for both hands that track the hands and arms in relation to the head of the user. While mobile VR devices are limited by the processing power of the smartphone inserted into the viewer, dedicated VR devices are much more powerful - although, again, dependent on the processing power of the PC used. Most manufacturers recommend a recent PC with a very recent graphics card. Once that's taken care of, the much higher bottom line shows across the board: close to full-HD pixel resolution per eye at a 110-degree field of view (compared to 55-78 degrees, and half of the smartphone's screen resolution, for mobile devices); a 90 Hz refresh rate compared to 60 Hz on mobile devices; and tracking of your head and hands in VR space, allowing for upper-body range of motion. The HTC Vive even tracks free-range motion in a 3-by-3-meter area via infrared sensors, which literally lets you walk around in VR space (until you bump into a wall or trip over the headset's wires). The high-powered GPU gives the user access to AAA-quality realtime 3D environments as we know them from AAA-quality computer games.

Positional Tracking, Head Tracking, Hand Tracking

It is great and surprisingly accurate, especially the HTC Vive's.
However, it can be a pain to set up, won't work in just any setting, and can grow inaccurate even within a single user session. On top of that, a major pitfall is that positional user input is not something that can casually be handled without a lot of experience in at least 3D programming. A good developer and solid user experience design are must-haves to turn this into a usable end-user experience. All major SDKs (Oculus, Vive, PlayStation) feature different approaches to implementing these controllers, which means double or triple the programming effort to roll out VR content using tracked user input on all devices at the same time. This is a good indication of where the VR ecosystem is as a whole: still at its first device generation, which is not at all standardized across devices and mainly targeted at tech-savvy users. That being said, there are about four times as many new VR devices announced for each following year than were released the year before - so hardware and software quality are bound to improve and will become feasible for less tech-savvy users.

What about audience size?

Let's stick to established terminology: dedicated VR devices are just that - dedicated. They require an initial investment of close to 2,000 dollars to get started and (especially the Vive) require a dedicated physical space where the motion trackers can be set up. The whole setup and operating process is very techy, and each system you want to support has its own ins and outs. These requirements cut down the potential audience size tremendously. The number of units sold to date does not exceed 1.6 million devices, many of which are owned by gamers (more on that later). PlayStation VR has sold 0.75 million alone. This deserves special consideration, because the PlayStation platform traditionally is very closed to developers, with strict requirements in place to gain access.
On top of that, Sony has specifically stated that it wants game content for its users rather than media and social VR content.

Mobile vs. dedicated devices - what's the difference?

It is very telling that, through PlayStation VR, most dedicated units sold have been shipped to gamers. This means that content for those devices will go beyond mere VR photo and video. Instead, expect highly immersive real-time 3D environments with high production values and costs, such as the IKEA VR Showroom and the Audi VR Showroom. Agencies and game studios work on these experiences for months at a time and then present them to a larger audience in a fixed setting, that is, at a store. This is 'dedicated' VR in all aspects: hardware, software, use case, and setting. There's nothing casual about it. Content production is complicated and takes a lot of time and money, because any VR content for dedicated devices is going to be compared to triple-A games. There might be a niche for dedicated VR feature films but, so far, there are no case studies or white papers indicating that watching a 60+ minute film without much interaction in it will be something a lot of users desire. To summarize: if you want to create VR games and have a decent budget and a skilled team available, dedicated VR devices are just the place for you, even if the group of potential buyers of what you create is well below 1 million people - unless you happen to be a registered PlayStation developer, in which case it's 1.6 million people. Another field where dedicated devices make sense is advertising, where a still-decent budget and a small team can create something that can keep up with a triple-A game, at least in a short session. For everybody else, mobile VR devices are going to be where it is happening, and you can be content with the fact that your audience will also be there. Read on about tools to create VR content and how you can go about optimizing your audience reach in the next post.
Sources:
[1] http://venturebeat.com/2017/02/04/superdata-vrs-breakout-2016-saw-6-3-million-headsets-shipped/
[2] http://www.digitaltrends.com/virtual-reality/oculus-rift-vs-htc-vive/

About the Author

Andreas Zeitler is the founder and CEO at Vuframe. He's been working with Augmented & Virtual Reality on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.

What skill sets do you need to earn more?

Packt Publishing
24 Jul 2015
3 min read
In the 2015 Skill Up survey, Packt talked to more than 20,000 people who work in IT globally to identify which skills are valued in technical roles and which trends are changing and emerging. More than 6,000 of those worked in Web Development, and they provided us with some great insight into how skills are rated across multiple industries, job roles and experience levels. The world of web development constantly changes, and can be highly competitive, so we wanted to find out which industries are best for those just entering the market. We also discovered which technologies are proving most popular, where you can earn the best salaries, and what skills you need to get that pay rise. We segmented our respondents into different types of developers, and now know which developers earn the highest average salaries, and which come second. We also know who earns the least - but at just under a $60k salary average, it's still a great market to go into! If you're about to move into your first role in Web Development, our report can tell you which industry pays the most for those with less experience. Should you go for a role in Entertainment/Media, Government, Health or Finance? At the other end of the scale, those with the longest experience tend to move into consulting, although the Government sector has its fair share of long-term experience, which is not all that surprising. Our survey results also gave an indication of where those in the middle of their career are employed. So what will be your next move? We wanted to test some assumptions as well, such as: Is Angular 2.0 about to take over? How valuable is it to be a full-stack developer? Are emerging economies taking a slice of the web pie? As expected, some of the results surprised us, and some confirmed that certain skills are becoming crucial to maintaining salary levels during your career.
In terms of the technology skills you need to either enter the market or build on your experience, the top technologies are JavaScript, AngularJS and Python, so you'll need to make sure you're on top of these skill sets. Read the rest of the report to see which skills you need to build on and which technologies are poised to take web development by storm, so you can get ahead of the competition. Click here to download the full report.
Stop thinking in components

Eduard Kyvenko
11 Nov 2016
5 min read
In this post, you will get a high-level overview of the theory behind the modern architecture of client-side Web applications, establish basic terminology, and define the main problem of Component-based software engineering. You will be able to translate that knowledge onto real examples of component-based technologies, such as React and Angular, to see how they are different from Elm.

Components: the building blocks of modern Web UI

Component-based development is the lingua franca of modern frontend programming. We have been using this term for years, but what is a component, exactly?

The definition of Component

From Computer Science we know that a Component is a software package or a module which encapsulates related functions or data. The Component, as a design pattern, originates from traditional Object Oriented Programming. A Component-based Web application consists of multiple components and some sort of infrastructure for component communication. In the real world, a component is usually represented by a class - for example, an Angular 2 Component or a React Component. Please don't confuse these with Web Components.

Object Oriented Programming and Components

Traditional Object Oriented Programming, as a design pattern, focuses on building a class hierarchy with a chain of inheritance. Despite the fact that Component-based software engineering still uses classes, it prefers composition over inheritance. Modern Web applications usually implement an architecture that includes Components and an infrastructure for their communication.

The problem of distributed state

Components rely on classes, which contain mutable state as properties. JavaScript is well known as one of the mainstream asynchronous programming languages. Maintaining distributed state with many relations is hard when every piece of data is scattered around the components, and it doesn't get easier when code is executed asynchronously.
Asynchronous state management is a hard task, which only gets worse with scale. Many JavaScript libraries, and the ES2015 spec, attempt to provide an experience of synchronous-looking code while we're actually writing asynchronous code, which often leads to problems with promises. Many design patterns aim to solve the problem of Component communication with distributed state. The Observer pattern is one of the traditional ways to establish Component communication in a Component-based application. From your frontend experience, you might be familiar with the three main types of Observer pattern:

Event Emitter: EventEmitter from Node.js, or the Events mixin from the Backbone.js library.
Pub Sub: not as widely used, because it requires you to follow a certain architecture, such as PubSubJS.
Signal (often referred to as a Stream): see rxjs or xstream.

Redux was a big game changer in state management with React, but with great power comes responsibility. A lot of people have been struggling with Redux, which led to a logical conclusion in the post by Dan Abramov, You Might Not Need Redux. The problem is that functional patterns are harder to implement in a language that has a strong imperative side.

Elm Architecture

An Elm application does not have components. The primary building blocks are pure functions. A minimal application consists of three main parts:

Update function: this is where you handle state changes; it accepts the current state and returns the next state.
View function: produces the output of your program; it accepts only one argument, which is the state of your application.
Initial state of the model: a function which simply returns the initial state of your application.

Technically, the whole Elm application is a component. It exposes a Signal or Reactive Stream API for interoperation with the outside world or other components. State is a unified storage of all data in the application.
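Elm syntax aside, those three parts map onto plain functions. Here is a rough JavaScript sketch of the same shape (a hand-rolled reducer loop, not the real Elm runtime; the counter model is invented for illustration):

```javascript
// The three parts of the Elm Architecture, sketched as plain
// JavaScript functions. Elm itself enforces purity; here it is
// only a convention.

// Initial state of the model.
const init = () => ({ count: 0 });

// Update: current state + message in, next state out. No mutation.
function update(state, msg) {
  switch (msg.type) {
    case 'Increment': return { ...state, count: state.count + 1 };
    case 'Decrement': return { ...state, count: state.count - 1 };
    default:          return state;
  }
}

// View: the state is its only input; returns a rendering of it.
const view = (state) => `Count: ${state.count}`;

// Driving the loop by hand:
let model = init();
model = update(model, { type: 'Increment' });
model = update(model, { type: 'Increment' });
console.log(view(model)); // "Count: 2"
```

Because update and view are pure, every state transition is reproducible and trivially testable, which is the core appeal of the pattern.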
Every stateful Elm application is built with a module called Html.App. As of today, Elm's main focus is on applications that produce a DOM tree as output.

Stop thinking about components

Component communication is a term which refers to the implementation of a certain infrastructure between two or more inter-operating classes with encapsulated state. The Elm Architecture instead composes update functions into a pipeline for changing one unified immutable state, starting from the function that produces the initial state. Code splitting is not the same as componentization; you can have as many modules as you want. You can check out an example of module composition in Elm on GitHub; it implements the so-called Fractal Architecture.

Practical advice on scaling Elm applications

Code organization should not be prioritized over business logic, and code splitting is rather harmful during the early stages of development. Omit type signatures at the beginning: without specific signatures you will have more freedom while experimenting. Elm code is extremely easy to refactor, so you can focus on getting stuff done and clean up later. When refactoring, try to do one thing at a time; the compiler is extremely useful, but if you don't define a limited scope for the refactoring process, you will get overwhelmed by compiler errors.

Elm is offering an alternative

Component-based development is the lingua franca of modern frontend programming. The Elm Architecture is a polished design pattern which offers a lot of interesting ideas, and some of them are drastically different from what we are used to. While component-based architectures written in imperative languages have their strong sides, Elm can provide a better way of implementing asynchronous flows. Function-based business logic is far more reliable and easier to test.

About the author

Eduard Kyvenko is Front-End Lead at Dinero.
He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.

Maximilian Schmitt
12 Dec 2014
1 min read

A short introduction to Gulp (video)

  About The Author Maximilian Schmitt is a full-time university student with a passion for web technologies and JavaScript. Tools he uses daily for development include gulp, Browserify, Node.js, AngularJS, and Express. When he's not working on development projects, he shares what he has learned through his blog and the occasional screencast on YouTube. Max recently co-authored "Developing a gulp Edge". You can find all of Max's projects on his website at maximilianschmitt.me.

Richard Gall
07 Dec 2015
6 min read

4 Reasons Why OpenStack Tokyo proved that OpenStack has Come of Age

If OpenStack had always appeared to be the trendy cloud solution, acclaimed and praised yet never quite taking hold of the software world's popular imagination, the OpenStack Summit in Tokyo at the end of October marked its maturity. While it may have spent the past few years defining and setting the standard for what modern organizations could do with Cloud from the fringes, as 2015 draws to a close there's no doubting that it is now a core part of the mainstream. This can only be good for an organization that has its sights set on becoming the key player in the market, but it also means new responsibilities and changing expectations. Maturity often means dealing with the crushing reality of 'real life', but OpenStack proved in Tokyo that this doesn't have to mean you become boring… Here are 4 reasons why the OpenStack Summit proved that OpenStack has now come of age.

OpenStack Certification

The announcement that the OpenStack Foundation, the non-profit organization that drives the OpenStack project, is going to set up certified training for Cloud admins is a distinctive mark of maturity that is likely to cement OpenStack's position within the market. Mark Collier, the foundation's COO, explains: "As OpenStack matures and enters bigger and bigger markets… what people typically want to do is really start to take the software and put it to use… They just want to operate it - and so that's where we see the biggest impact in terms of skills." The certification signals that the organization is trying to address a difficult and challenging reality – that there is a talent gap of knowledgeable and experienced Cloud admins. Of course, tackling this will be crucial to OpenStack both expanding and consolidating its use.
Collier's suggestion that this certification (which is due to be rolled out in 2016) is the first of many emphasises that OpenStack isn't drawing away from its versatility as a cloud platform but is instead harnessing and refining it, so users can have more confidence and greater purpose.

Project Navigator

Project Navigator neatly follows on from the OpenStack Foundation's certification, as it is part of the same thematic trend – OpenStack's movement towards giving users more control over their software. Essentially, it will help users identify their key needs and direct them towards the products and services that suit them best. Built on a wealth of user data about what types of projects are built on what software, Project Navigator delivers really useful information in one accessible dashboard. Whoever uses it will be able to see what other people are doing, and can then base their own decisions on a wider consensus of what works with what. Project Navigator demonstrates that OpenStack is acutely aware of the huge range of its users' needs, and, indeed, the potentially confusing scope of possibilities that OpenStack offers. Just as the certification provides a way of defining best practices and emphasising the core features of OpenStack from different users' perspectives, Project Navigator similarly helps to define the different ways in which OpenStack can be used. But what's most impressive about the project is that it's managed to retain the Open Source values of openness and creativity. As Collier explained, because Project Navigator is driven by user data, "we're not really making a judgment call. It's more just a reflection of where the market is". What we have, then, is a platform that speaks to the concerns of high-level technology strategists making key organizational decisions; it doesn't dictate how something should be done, but instead simply outlines what people are doing now.
OpenStack Liberty

OpenStack Liberty was at the centre of October's Summit. If the Summit represents a watershed moment in OpenStack's lifespan, the 12th version of the cloud solution expertly demonstrates and underlines that the organization is listening to users and committed to tackling the key challenges that lie ahead. Adding role-based access control to Neutron (OpenStack's networking project, often described as unnecessarily complex by critics) and Heat will, as ZDNet put it, "provide fine controls over security settings at all levels of the network and API." The issue of scalability, too, appears to be being tackled by Liberty, with the new version of cells set to become "the default way in which Nova is deployed". Essentially, scaling will simply become a case of adding new cells to the single cell that makes up a Nova instance. (If you want a comprehensive look at what's new in Liberty, Mirantis helpfully run through every single new feature, which you can read here.) Liberty has been described as a move towards a 'Big Tent' model, whereby projects are brought together to become part of a more coherent whole. As Jonathan Bryce, the OpenStack Foundation's Executive Director, explains, "With the Big Tent shift, it has allowed people within the OpenStack community to select different focus areas, so we're seeing a lot of innovation." Again, this move lets OpenStack emphasise the sheer range of possibilities on offer, while still putting forward a singular vision. Indeed, perhaps Bryce is being a little disingenuous – yes, it's about letting people focus on what they want, but it's likely that over time innovation will be driven by people looking at how different projects intersect and work together in new ways.

Going International – OpenStack is growing outside of the U.S.

It's significant that this October's summit was held in Tokyo.
Although the OpenStack user base is predominantly located in North America, with 44% of users based there, it's worth noting that 28% of users are based in Asia, with Europe on 22%. Clearly, there is a lot of room for growth in these areas, and announcements such as the training certification and Project Navigator, positioned alongside improvements to the core OpenStack offering, have been designed to do exactly that. Yahoo Japan provides a great case study of OpenStack being used on a large scale outside of the U.S. Yahoo claim to be running more than 50,000 virtual machines on OpenStack – as Mark Collier points out, this means that there is just "a team of six running 10 billion page views". It's true that a single success story shouldn't necessarily be taken as evidence of some wider trend – but the fact that the OpenStack Foundation is so interested in talking about it provides a clear indication that they are looking for new stories to promote the project, which will help them reach out and engage new people. For a comprehensive look at OpenStack, pick up the latest edition of OpenStack Cloud Computing Cookbook today. Packed with more than 110 recipes, it helps you properly get to grips with the platform so you can harness the opportunities it creates for more productive, collaborative and efficient working.

Sarah
22 Jun 2014
5 min read

You Want a Job in Publishing? Please Set Fire to this Unicorn in C#.

Total immersion in a tech puzzle that's over your head. That's part of the Packt induction policy. Even those in an editorial role are expected to do battle with some code to get a feel for why we make the books we make. When I joined the commissioning team I'd heard rumours that the current task involved frontend web development. Now, English major I may be, but I've been building sites since the first time my rural convent school took away our paper keyboards and let us loose on the real thing. (True story.) I apprenticed in frames and div.gifs, thankfully lost in the Geocities extinction event. CSS or Java? Maybe Sass or jQuery? I was smug. I was on this. Assignment: "This is Unity. Build a game. You have four days." Hang on. What? The last time I wrote a computer game it was a text adventure in TADS. Turns out amateur game dev technology has moved on somewhat since then. There's nothing like an open brief in a technology you've never even installed before to make you feel cool and in control in a new job. But that was the point, of course. Four days to read, Google, innovate, or pray one's way out of what business types like to call a "pain point". So this is a quick précis of my 32-hour journey from mortifying ignorance to relative success. No, I didn't become the next Flappy Bird millionaire. But I wrote a game, I learned some C#, and I gained a new appreciation for how valuable the guidance of a good author can be as part of the vast toolkit we now have at our fingertips when learning new software. My completed game had a complicated narrative.

Day one: deciding what kind of game to make

"Make a game" is a really mean instruction. (Yes, boss. I'm standing by that.) "Make an FPS." "Make a pong clone." "Make a game where you're a Tetris block trapped in a Beckett play." All of these are problems to be solved. But "I want to make a game" is about as clear a motivation as "I want to be rich". There are a lot of missing steps there.
And it can lead to delusions of scale. Four whole days? I'll write an MMO! I wasted a morning on daydreaming before panicking at lunch and deciding on a side-scroller, on the reasonable logic that those have a beginning, a middle, and an end. I knew from the start that I didn't just want to copy and paste, but I also knew that I couldn't afford to be too precious about my plans before learning the tool. The sheer volume of information on Unity out there is overwhelming. Eventually I started with a book on 2D games in Unity. By the end of the day, I had a basic game. It wasn't my game, but I'd learned enough along the way to start thinking about what I could do with Unity.

Day two: learning the code

By mid-morning of day two I'd hit a block. I don't know C#. I've never programmed in C. But if I wanted to do this properly I was going to have to write my own code. Terry Norton wrote us a book on learning C# with Unity. For me, a day spent working with one clear voice explaining the core concepts before I experimented on my own was exactly what I needed. Day two was given over to learning how to build a state machine from Norton's book. State machines give one strong and flexible control over the narrative of a game. If nothing else ever comes from this whole exercise, that is a genuinely cool thing to be able to do in Unity. Eight hours later I had a much better feel for what the engine could do and how the language worked.

Day three: everything is terrible

Day three is the Wednesday of which I do not speak. Where did the walls go? Everything is pink. Why don't you go left when I tell you to go left? And let's not even get started on my abortive attempts to simultaneously learn to model game objects in Maya. For one hour it looked like all I had to show for the week's work was a single grey sphere that would roll off the back of a platform and then fall through infinite virtual space until my countdown timer triggered a change of state and it all began again.
This was an even worse game than the Beckett-Tetris idea.

Day four: bringing it together

Even though day three was a nightmare, there was a structure to the horror. Because I'd spent Monday learning the interface and Tuesday building a state machine, I had some idea of where the problems lay and what the solutions might look like, even if I couldn't solve them alone. That's where the brilliance of tech online communities comes in, and the Unity community is pretty special. To my astonishment, step by step, I fixed each element with the help of my books, the documentation, and Unity Answers. I ended up with a game that worked.

Day five: a lick of paint

I cheated, obviously. On Friday I came in early and stole a couple of extra hours to swap my polygons for some sketched sprites and add some splash pages. Now my game worked and it was pretty. Check it out: green orbs increase speed, red slows you down, the waterfall douses the flame. Complex. It works. It's playable. If I have the time, there's room to extend it to more levels. I even incidentally picked up some extra skills, like animating the sprites properly and adding particle streams for extra flair. Bursting with pride, I showed it to our Category Manager for Game Dev. He showed me Unicorn Dash. That game is better than my game. Well, you can't win 'em all.
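The state machine from day two is the one idea here worth a closer look. Norton's book builds one in C# for Unity, but the pattern is language-agnostic; here is a minimal Python sketch, with states and events invented purely for illustration:

```python
# Minimal game state machine: each state names the transitions it allows,
# so the narrative of the game can only move along edges you define.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {state: {event: next_state}}

    def handle(self, event):
        next_state = self.transitions.get(self.state, {}).get(event)
        if next_state is None:
            raise ValueError(f"'{event}' is not valid in state '{self.state}'")
        self.state = next_state
        return self.state

game = StateMachine("menu", {
    "menu": {"start": "playing"},
    "playing": {"die": "game_over", "win": "victory"},
    "game_over": {"retry": "playing"},
})

game.handle("start")
game.handle("die")
game.handle("retry")
print(game.state)  # playing
```

The payoff is exactly the control described above: an event that doesn't belong to the current state raises an error instead of silently corrupting the game's narrative.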
Hari Vignesh
12 Apr 2017
8 min read

11 ways developers can make an impact on management

The annual closing for your organization is coming. Is your increment satisfactory? Did someone outperform you? Did they outperform you without working as hard as you do? Have you ever worried about how to impress your boss or your management? Are you earning credit for your work? If you don't have a proper answer, you're in the right place. In this blog post, I'll share a few hacks that'll keep you on the road to impressing your management.

Pay Attention to the Minor Details

This is the most crucial rule. I have seen many super smart and talented programmers write very crisp and clean code, and do it fast. But sometimes, they miss minor requirement details. These minor details can include applying formatting to a field, validating a date field for proper input, formatting a table, or even a font on a page. If I were your project manager or lead, I wouldn't be happy with you if you failed to pay attention to these minor details. It's never good to repeat work over and over again.

Rethink Before You Deliver

One of my bad habits is not reviewing my work. Even after I write an article, I hate to review it. You must make sure to review (sanity or unit test) your work before you deliver it and see how you can improve it. Sometimes this is difficult due to tight deadlines, and in that case, I would hold project managers and team leaders accountable for that. If you need extra time to review it, let your boss know. If he does not give you extra time, then you do not need to worry about it.

Choose Design Patterns Wisely

Design patterns are good for undertaking TDD (Test-Driven Development). This reduces the cost of testing, and the chances of the build being stable are high. Adopting the right tech stack, choosing the right framework, calculating risks on migration, and delivering flexible and maintainable code all help in building trust with your management.
In short, you can impress them by delivering products with the latest, scalable tech, and also by reducing the maintenance cost.

Show Up Excited and Eager to Learn Every Day

This one quality is lacking in many employees, especially among fresh grads. Their excitement about their work seems to last only a few months. This really impacts their career growth in a bad way, and it definitely creates a negative impression among management. How do you solve this problem? The ethical solution is to pick a job that doesn't feel like a job. You should love to do it every day. In short, take up your passion as your job. If it's too late for that, then make up your mind and learn about your domain every day. Remember, knowledge is power. The more knowledgeable you are, the more you will be respected by your management, and it will help in cultivating trust. There is a Japanese word, "Kaizen," which means continuous improvement. This philosophy suits all things in life. Continuous improvement will impact your career in a positive way.

Clear Understanding of Roles and Responsibilities

Most people fail to keep up with their roles and responsibilities. Roles and responsibilities are like promises made to the company. Employees should never forget them or be casual about them. These promises determine your daily activity, and remember that the organization is paying for that. So it's time we followed those promises to the letter. Developers especially, in most companies, will have the following responsibilities:

Writing clean and maintainable code
Reviewing the code regularly and following standards
Using optimal solutions
Constantly improving their performance
Coordinating with co-workers to meet deliverables
Contributing more hours when requested by the company

These are some of the basic responsibilities that I've seen in many JDs. Following all of them, or at least 90 percent of them, will create a hugely positive impact.
Understanding Client or Product Requirements (end to end)

If you're working for a services firm, you need to understand the client's requirements to the letter. If the requirements come from a multilevel hierarchy, set up a meeting with your peers, vet your understanding, and get an approved document covering the requirements. It's a very good practice to document every conversation you've had in these meetings (minutes of meeting, or MOM) and communicate all of these conversations via e-mail (just to have time-stamped proof) to avoid getting into unnecessary politics within the firm. If you're working for a product-based company, it's mandatory to understand the product's needs and purpose. And the management will love it if you come up with doubts and issues regarding the requirements, because it will help them refine the items on their plate. So before beginning your work, clarify most of your doubts and vet the solution that you're planning to implement.

Sticking to the Time

As a developer, you'll be delivering plenty of items throughout your career. It's critical to stick to the time promised in order to maintain your trust and reputation. So before starting your work, it's essential to negotiate the time period for the deliverables. If the timeline management is suggesting is impossible, it's your responsibility to make them understand and negotiate a compromise. In short, delivering things on or within the promised time is critical to building trust with your management. If you're good at delivering things on time, then the management will definitely expose you to more opportunities and benefits.

Understanding the Psychology of Your Peers, Management, and Clients

You must understand what makes your management happy. For that, you need to understand their behavior.
It can be clients, peers, or your management, but you can understand their character just by talking to them on a regular basis. The equation is simple. Make the client happy or satisfied, and you will in turn make the management happy. If it's a product-based company, make the users and customers happy; this will definitely make your management happy. To make your client or customer happy, you need to know what kind of person they are and what they need. To know what they need, you need to establish a good rapport and talk to them frequently. So give them what they need, and the equation executes on its own.

Demand Responsibility

Management always encourages and loves people who can take up any of the responsibilities available on their plate. So if you don't have much work, or if you feel you're not occupied, feel free to ask your management for additional responsibilities (sometimes, managers will keep you free to see how you're doing: are you demanding work, or enjoying the benefits as they are?). If there are any vacant roles, make a request to your management and help them understand why you would be the right fit. It's all up to you how you play your cards.

Feed Them with Ideas Regularly

The management isn't the only idea vending machine. You also play an important part in it. Sharing ideas continuously and brainstorming ways to improve the product or deliverables will showcase your management skills to your management team. Also, if you have a wonderful idea and management has decided to implement it, then play your cards in showcasing that it was your idea. Plus, if your idea gets approval, it will come to you for implementation, so make sure that your idea is totally implementable.

Show Your Growth on Professional Networks

If you're not on professional networks like LinkedIn and AngelList, please start creating your profiles today.
Record every professional achievement, within the company or outside it, and make sure that your management team sees it. This will constantly remind them of your growth, and will prevent them from forgetting you and your contributions. Remember: you don't just have a job, you have an opportunity. You have a chance to prove yourself. Show up hungry. Make it matter. Thank you for spending your valuable time reading this article. I hope you've gained a few tips to take home and practice. If you've liked this article, please share.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter at @HariofSpades.

Mary Gualtieri
12 Apr 2017
5 min read

What is it like to learn a language at a coding bootcamp?

Why I joined a coding bootcamp

I was 25. I couldn't figure out the direction of my life. I had several failed attempts at higher education, but I could never find my happiness at a university. I began to search for alternative ways to get an education that wouldn't take me four years to get into the workforce. I finally came across Wyncode Academy, a 9-week coding bootcamp. I read that it would allow me to enter the workforce and make a decent salary without investing years to get there. This was the perfect solution for me. Did I know anything about a coding language going into this? Only the bare basics. Choosing whether to go to a traditional school or an alternative school can be a daunting task. You really have to decide what the best path is for you. A bootcamp was the best path for ME. It was a hard experience. It was not easy. I had to get out of my comfort zone. I had to shut off my world for 9 weeks: eating, breathing, and sleeping code. My family would only see me for five minutes when I would come out of my room to get a cup of coffee and leave to go to Wyncode to begin my day. But it was a small sacrifice and a short season in my life that would eventually end.

What was my experience like?

Beginning a coding bootcamp was like beginning middle school all over again. I walked into a room of twenty-five people, from 20-year-olds to 55-year-olds. We looked at each other, not knowing what to say to one another. We were from all different walks of life. Some of us were lawyers, some of us were business owners, and some of us were looking to do something different with our careers. Mostly, we were all looking for a change and to breathe new life into ourselves. Our days would begin early. We would get to our classroom around 7:30 in the morning to begin coding. If one of us didn't understand a homework problem, we had someone who knew how to solve it. We always had someone who would explain the why and the how of getting to a solution.
At around 10 in the morning, we would begin our lectures on various topics. In the afternoon, we had some type of activity to apply what we had just learned, like a hackathon or a live coding session. We would end our lectures around 4:00 in the afternoon. After that was when the real learning and my curiosity for coding began. I had all this knowledge I had just learned, but now I had to apply it on my own. Toward the end of my bootcamp experience, it was finally time to apply what I had learned from the previous weeks. We worked in teams to build full-stack applications and give real pitches to judges. It simulated what it would be like to pitch an idea to real investors. Even though my time at my bootcamp ended, my time as a web developer was just beginning. I learned enough to keep my curiosity alive and to keep going. On the last day of our bootcamp, we were told to keep the ABCs—Always Be Coding. That rang true for me. During my bootcamp tenure, I learned Ruby on Rails, HTML, CSS, and JavaScript as my foundation. It was a good foundation to begin with because I was taught fundamentals and theories that could be applied to different languages. My curiosity, in the end, was what pushed me to seek out more coding languages. After the last day of my bootcamp, I committed myself to being a JavaScript developer. I began learning other JavaScript libraries that would not only make me a desirable candidate to hire, but also make me happy and interested in what I was doing. Fast-forward a couple of years. I am now the lead web developer for a small communications company. I'm still a constant learner, but now I get to invest in the people I help lead. I look forward to going to work every day, and it all started with going to a coding bootcamp. The best part of my bootcamp experience was the confidence I gained to succeed in the workforce. I believe in myself and finally have the "I can" attitude that I had been lacking.
Doors have opened for me that I didn't know I wanted open. I gained an unexpected family that shared a great experience with me. I gained a business acumen that has set me apart from the person next to me. Going to a coding bootcamp was one of the best decisions I have ever made. My only wish is that I had done it sooner.

About the author

Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.

Richard Gall
18 Dec 2015
5 min read

5 Things That Defined Tech in 2015

2015 has been the year that the future of tech has become more clearly defined. For us at Packt, it's been a year of reflection and analysis. We've been finding out more about the lives of our customers, looking at what's driven their careers and technical expertise over the last decade, and looking ahead towards the challenges not only of 2016 but also of the decade ahead. Our Skill Up Skills and Salary Reports were at the centre of this, and have given us a fresh perspective on the lives that we build around the tech we use every day. But we certainly don't want to stop learning about what makes you tick – and what keeps you up at night. That's why we've ended the year with our very first Year in Review. It's a chance for us to join the dots between the year that's been and gone and the one that's just ahead. So what was important? Take a look at some of the key findings, and then read the full report yourself.

Python, Python, Python

What better place to begin than with Python? Our end of year survey found that not only was Python the fastest growing topic of 2015 – being the most adopted programming language on the planet – it is also going to be the fastest growing topic of 2016. Surely this alone underlines that it is now the language of software par excellence. That's not to say it's superior to other languages – there are, of course, plenty of reasons not to use Python – but its versatility and ease of use mean it's an accessible route into the software world, a solution to a vast range of contemporary problems, from building websites to analysing data.

Specialized Programming Languages

If Python has reached into just about every corner of the programming world, a curious counterpoint is the emergence of more specialized programming languages. In many ways, these languages share a number of Python's distinctive characteristics.
Languages like Go (which was far and away the winner when it came to languages people wanted to learn), Julia and Clojure all offer a level of control, their clear and expressive syntax perfect for complex problem solving and high performance. As the programming world becomes obsessed with speed and performance, it's clear that these languages will be sticking around for some time.

Bigger, Better, Faster Data

The next generation of Big Data and Data Science is already here. You probably already knew that – but the topics that emerged from our survey results indicate the direction in which the world of data is heading. Deep Learning was the most notable trend on the agenda for 2016. Clearly, now that Machine Learning has become embedded in our everyday lives (professional and personal), 2016 is going to be all about creating even more sophisticated algorithms that can produce detailed levels of insight that would have been unimaginable a few years ago. Alongside this – perhaps as a corollary to this next-level machine learning – is the movement towards rapid, or even real-time, Big Data processing. Tools such as Apache Kafka, Spark and Mesos all point towards this trend, and all are likely to become key tools in our Big Data infrastructures not only over the next 12 months but also over the next few years.

Internet of Things might Finally Become a Reality

Even 12 months ago the Internet of Things looked like little more than a twinkle in the eye of a futurologist. Today it has finally taken form – we're not there yet, but it's starting to look like something that's going to have a real impact not only on the way we work, but also on the way we understand software's relationship to the world around us. The growth of wearables, and the rise of applications connected to real-world objects (we love home automation), are the first steps towards a world that is entirely connected.
It's important to remember this is going to have a huge impact on everyone working in tech – from the developers creating applications to the analysts charged with harnessing this second explosion of data.

The Future of JavaScript

Many web developers we spoke to listed AngularJS as the most useful thing they learned in 2015 – many more also said they planned on learning it in 2016. The impact of the eagerly awaited Angular 2.0 remains to be seen, but it's likely that the best way to prepare for the next generation of Angular is by getting to grips with Angular now! It would be unwise to see Angular's dominance in isolation – it's the growth of full-stack development that's been crucial in 2015, and something that is going to shape the next 12 months in web development. Node.js featured as a key topic for many of our customers, highlighting that innovation in web development appears to be driven by tools that provide new ways of working with JavaScript. Although Node and Angular have a real hold when it comes to JavaScript, we should also pay attention to newer frameworks like React.js and Meteor. These are frameworks that are tackling the complexity and heft of today's websites and applications through radical simplicity – if you're a web developer, you cannot afford to ignore them. Download our Year in Review and explore our key findings in more detail. Then start exploring the topics and tools that you need to learn by taking advantage of our huge $5 offer!

Understanding Kubernetes: Google’s Open Docker Orchestration Engine

Ryan Richard
13 Feb 2015
6 min read
In April 2014 the first DockerCon took place to a packed house. It became clear that Docker had the right recipe to become a game changer, but one thing was missing: orchestration. Many companies were attempting to answer the question: “How do I run hundreds or thousands of containers across my infrastructure?” A number of solutions emerged that week: Kubernetes from Google, geard from Red Hat, fleet from CoreOS, Deis, Flynn, ad infinitum. Even today there are well over 20 open source solutions to this problem, but one has emerged as an early leader: Kubernetes (kubernetes.io). Besides being built by Google, it has a few features that make it the most interesting solution: pods, labels and services. We’ll review these features in this blog. Along with much of the Docker ecosystem, Kubernetes is written in Go, open source and under heavy development. As of today, it can be deployed on GCE, Rackspace, VMware, Azure, AWS, Digital Ocean, Vagrant and others with scripts located in the official repository (https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster). Deploying Kubernetes is generally done via SaltStack, but there are a number of deployment options for CoreOS as well.

Kubernetes Paradigms

Let’s take a look at pods, labels and services.

Pods

Pods are the primary unit that Kubernetes schedules into your cluster. A pod may consist of one or more containers. If you define more than one container, they are guaranteed to be co-located on a system, allowing you to share local volumes and networking between the containers. Here is an example of a pod definition with one container running a website, presumably with an application already in the image. (These specs are from the original API, which is under heavy development and will change.)
<code - json>
{
  "id": "mysite",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "mysite",
      "containers": [{
        "name": "mysite",
        "image": "user/mysite",
        "cpu": 100,
        "ports": [{ "containerPort": 80 }]
      }]
    }
  },
  "labels": { "name": "mysite" }
}
</code - json>

In reality you probably want more than one of these containers running, in case of a node failure or to help with load. This is where the ReplicationController paradigm comes in. It allows a user to run multiple replicas of the same pod. Data is not shared between replicas; instead, many instances of a pod are scheduled across the cluster.

<code - json>
{
  "id": "mysiteController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": { "name": "mysite" },
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "mysiteController",
          "containers": [{
            "name": "mysite",
            "image": "user/mysite",
            "cpu": 100,
            "ports": [{ "containerPort": 80 }]
          }]
        }
      },
      "labels": { "name": "mysite" }
    }
  },
  "labels": { "name": "mysite" }
}
</code - json>

In the above template we took the same pod and converted it to a ReplicationController. The "replicas" directive says that we want two of these pods running at all times. Scaling the number of containers is as simple as raising the replica value.

Labels

Conceptually, labels are similar to standard metadata tags, except that they are arbitrary key/value pairs. If you want to label your pod “environment: staging” or “name: redis-slave” or both, go right ahead. Labels are primarily used by services to build powerful internal load-balancing proxies, but can also be used to filter output from the API.

Services

Services are user-defined “load balancers” that are aware of container locations and their labels. When a user creates a service, a proxy is created on the Kubernetes nodes that will seamlessly proxy to any container that has the selected labels assigned.
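The label/selector mechanic behind services is simple to model: a service matches any pod whose labels contain every key/value pair in the service’s selector. Here is a minimal Python sketch of that matching rule (illustrative only – the names and data shapes are ours, not the Kubernetes API):

```python
def matches(selector, labels):
    # A pod matches when every selector key/value pair appears in its labels.
    return all(labels.get(key) == value for key, value in selector.items())

pods = [
    {"id": "mysite-1", "labels": {"name": "mysite", "environment": "staging"}},
    {"id": "mysite-2", "labels": {"name": "mysite"}},
    {"id": "redis-1", "labels": {"name": "redis-slave"}},
]

# A selector of {"name": "mysite"} picks out only the mysite pods;
# extra labels on a pod (like "environment") do not disqualify it.
selector = {"name": "mysite"}
backends = [pod["id"] for pod in pods if matches(selector, pod["labels"])]
print(backends)  # ['mysite-1', 'mysite-2']
```

The service proxy balances traffic across exactly this set of backends, recomputing it as pods come and go.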
<code - json>
{
  "id": "mysite",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 10000,
  "selector": { "name": "mysite" },
  "labels": { "name": "mysite" }
}
</code - json>

This basic example creates a service that listens on port 10000 and proxies to any pod that fulfills the "selector" requirement of “name: mysite”. If you have one container running, it will get all the traffic; if you have three, they will each receive traffic. If you grow or shrink the number of containers, the proxies will be aware and balance accordingly. Not all of these concepts are unique to Kubernetes, but it brings them together seamlessly. The future is also interesting for Kubernetes because it can act as a broker to the cloud provider for your containers. Need a static IP for a special pod? It could get that from the cloud provider. Need another server for additional resources? It could provision one and add it to the cluster.

Google Container Engine

It wasn’t a far stretch to see that if this project was successful, Google would run it as a service on their cloud. Indeed, they’ve announced Google Container Engine, based on Kubernetes (https://cloud.google.com/container-engine/). This also marks the first time Google has built a tool in the open and productized it. A successful product may mean that we see more day-one open source projects from Google, which is certainly intriguing.

AWS and Docker Orchestration

Amazon announced their container orchestration service at re:Invent. This blog wouldn’t be complete without a quick comparison between the two. Amazon allows you to co-locate multiple Docker containers on a single host, but the similarities with Kubernetes stop there. Their container service is proprietary, which isn’t a surprise. They’re using links to connect containers on the same host, but there is no mention of smart proxies inside the system. There isn’t a lot of integration with the rest of the AWS services (i.e.
load balancing) yet, but I expect that to change pretty quickly.

Summary

In this post, we touched on why Kubernetes exists, why it’s a unique leader in the pack, a bit on its paradigms, and finally a quick comparison to the AWS EC2 Container Service. The EC2 Container Service will get a lot of attention, but in my opinion Kubernetes is the Docker orchestration technology to beat right now, especially if you value open source. If you’re wondering which direction Docker is heading, make sure to keep an eye out for Docker Host and Docker Cluster. Lastly, I hope you recognize that we are at the beginning stages of a new deployment and operational paradigm that leverages lightweight containers. Expect this space to change and evolve rapidly. For more Docker tutorials and even more insight and analysis, visit our dedicated Docker page - find it here.

About the author

Ryan Richard is a systems architect at Rackspace with a background in automation and OpenStack. His primary role revolves around research and development of new technologies. He added the initial support for the Rackspace Cloud to the Kubernetes codebase. He can be reached at @rackninja on Twitter.

The Business Value of Existing 3D Data in the Age of Augmented and Virtual Reality - Part 1

Erich Renz
09 Mar 2017
6 min read
This blog post covers the impact of emerging technologies on many of today’s business models. It is driven by the hypothetical question “What if I use or reuse my existing 3D data to design, produce, promote or sell a product?” and addresses executives, marketers, sales representatives and public relations managers – those risk-takers and doers who make strategic decisions and decide how products can be marketed and sold to current and future clients in a new way. In this article, we will also refer to business theories and apply them to the realm of 3D, Augmented, and Virtual Reality.

Getting the job done with Augmented and Virtual Reality

In his latest book, Competing Against Luck, Harvard Business School professor Clayton M. Christensen and his co-authors build their theory of innovation and customer choice on the premise that “we all have jobs we need to do that arise in our day-to-day lives and when we do, we hire products or services to get these jobs done.” Practically speaking, if you are a sales manager in a B2B company that sells industrial gates, there are plenty of options for getting your job done. You could use print brochures to introduce a showpiece to an interested lead. Shortly after the intro you might pull out documents with technical specifications. To round out the conversation, you show a short movie that demonstrates the functionality of your product in a fun, engaging way. In return, your client might tell you that this is more or less what he needs to get his job done – that is, driving the truck smoothly into the garage, parking it safely and not thinking about any further maintenance costs or the safety of the truck while parked. With new technologies on the rise, we can help both our sales manager and his customer make their working lives easier.
Equip a sales representative with a tablet and an app that contains either every product in the company’s product line or a comprehensive product configurator – one that lets him choose between different models, present mechanical systems in real time and assemble the demanded product in front of an interested lead – and the organization he represents shows then and there that customer needs are taken seriously. Together with the lead he will select the product and preview it in Augmented Reality to see if it fits the conditions on-site, readjusting if need be. Once he has finished his visit, he can send a report to the client and his own back office for documentation. It is almost a no-brainer to mention that this data can be reused for installation purposes and to clarify issues before any legal actions are taken (due to wrong installation, missing mounting parts or malfunctioning). Here is another case. Imagine you were to market properties to prospective buyers. You could perform the job by compiling all the information you want to share on a website that includes floor plans, image galleries and promotional videos. You could also advertise on one of the many platforms that specialize in selling properties. These are two possible approaches to getting your job done as a real estate marketer. But what if you reuse the architect’s 3D CAD model and present the future building, located in Madrid, to the family from Belgrade who is really keen on doing a virtual walk through the apartment? All they need is a tablet or a smartphone. We are not talking about expensive tech that has to be newly purchased.
Immersing your clients in a virtual environment that cannot be entered under normal circumstances – because it is expensive, dangerous or impossible to get there, or the event is so rare that it is unlikely to be easily repeated – is key to getting the job done with Virtual Reality in a sensible way, far from being gimmicky or unsustainable.

How do AR/VR technologies affect particular building blocks in a business model?

To continue the argument of how 3D will influence future business models, I will refer to Alexander Osterwalder’s and Yves Pigneur’s widely used Business Model Canvas. The canvas helps to visualize how a company generates revenue and how a value proposition is created and delivered to specific customer groups. Value proposition and customer segments are at the heart of each and every business model. 3D data has one big advantage: as long as it sits there, it cannot lose any value. Anyone in research and development, sales or marketing can use it. 3D lays the foundation of a common shared language between employees and customers. If you are thinking about your first or next step towards digitization, start with 3D. Seriously.

Value Proposition

Meeting the needs of customers is the supreme discipline. Ask yourself “What value do we deliver to the customer?” or “Which of our customer’s problems are we helping to solve?” A big plus that comes with Augmented Reality is cost reduction – by lessening the number of returned goods – alongside product accessibility. Take, for example, a prospective customer who is trying to find out if a desired shelf fits into that small spot in her bedroom. Life-size product previews can become a substantial aid in making purchase decisions. If she can access the product any time on her phone and get all the product-specific information combined with its visual representation in real time, she can interact with the product before purchase and test it upfront.
Thus, playfully consuming information with the benefits of AR has the potential to lower the rate of returned goods. In the visual-driven age of Augmented and Virtual Reality, the buying process shifts from linear buying – based on a mixture of descriptions and product visuals – to a non-linear experience in which the consumer accesses and selects the desired products and adapts them to their personal circumstances. That said, virtual product demonstrations bring the value proposition closer than ever into the homes of potential buyers. This is an outstanding opportunity for producers to engage with customers on an emotional level that is informative and informal in its essence. As you can see, opportunity abounds. In part two of this article, we will take a look at customer segments, distribution channels, revenue streams, and more.

About the author

Erich Renz is Director of Product Management at Vuframe, an online platform for virtual product demonstrations with Augmented and Virtual Reality, where he is investigating and driving the development of AR, VR, 3D and 360° applications for businesses and consumers.