
Tech Guides

OOCSS!

Liz Tom
02 Feb 2016
6 min read
Object Oriented Programming is something you've probably heard a lot about. While CSS is not a programming language, you can still use Object Oriented principles to write highly maintainable CSS. One problem I have when jumping into an existing code base is figuring out how on earth everything is styled. The worst case scenario I've ever heard of? A project with over 100 !important declarations on various link classes all over the code base, because an !important had been placed on the base anchor styles. What happens when you inherit CSS files structured like this? Sometimes you're good and make things better, but often a combination of timeline, other things on your plate, and a little bit of laziness leads you to add things that make the code base even worse.

What is OO CSS?

First off, what is Object Oriented CSS? How abstract should you get? These are important questions. One: Object Oriented CSS is about abstraction. Two: you shouldn't be so abstract that new people coming onto the project have no idea what your classes mean. If you have a bunch of names like .blue, .padding-left-10, .big, and .small, your CSS is about as maintainable as inline styles. There's a balance to strike between abstraction and practicality.

BEM!

I've used BEM on personal projects and with a team. I love it. I don't want to go back to writing my CSS without the BEM naming convention. Here are the basics: BEM stands for block__element--modifier. Block is the component you're building. Element is an element of the component (for example: header, text, image, icon). Modifier modifies a component.

The trick is that you don't want to do something like this: block__element--modifier__element or block__element--modifier--modifier. If you're using a preprocessor, you can break your files up into components so that each file deals with only one block. This makes it very easy for other developers joining a project to find your component. Don't go more than two levels deep. Keep your CSS as flat as possible; it's okay to have long class names.

The arguments against BEM include things like: "It makes my markup ugly." I would rather have maintainable CSS and HTML than pretty CSS and HTML. Also, after working with BEM for about a year now, the syntax has grown on me and I now find it beautiful. If you'd like a more in-depth view of BEM, you should check out this great post.

How Does This All Fit Together?

You have to find out what works and what doesn't work for you. I've found that I prefer to think of everything on my page as various components that I can use anywhere. I borrow a little from SMACSS and a little from BEM (you can read a lot more about both subjects), and I've created my own way of writing CSS (I use SCSS; you use what makes you comfy). I like to keep layout out of the equation when writing components (thanks, SMACSS!). I'm going to use generic layout classes here, because everyone likes their own particular framework.

Say you have a basic card that you're trying to create. You create a base component named card, placed in a file named card.scss or whatever. Each component contains the elements and modifiers below it. I personally like to nest within my modifiers. A 'large card' should always have a .card__header that has a green background, but I don't like to make a class like .card__header--large. Instead, I keep both the .card class and the .card--large class on my div. This way I get all the classes that a card has, and I also modify the parts I want with a --large modifier. Different people have different opinions on this, but I've found it works great for maintainability as well as for ease of copy-pasting markup into various parts of your page. Your CSS can look a little something like this:

.card {
  color: $blue;
}

.card__header {
  font-size: 1.2rem;
  background-color: $red;
}

.card__header--blue {
  background-color: $blue;
}

.card__title {
  color: $green;
}

.card__author {
  font-size: rem-calc(20);
}

...

.card--large {
  font-size: rem-calc(40);

  .card__header {
    background-color: $green;
  }

  .card__title {
    font-size: 2.0rem;
  }
}

Now for your markup:

<div class="column">
  <div class="card">
    <div class="card__header">
      <h2 class="card__title">Hi</h2>
      <h3 class="card__author">I'm an author</h3>
    </div>
    <div class="card__body">
      <p class="card__copy">I'm the copy</p>
      <img class="card__thumbnail">
    </div>
  </div>
</div>

<div class="column">
  <div class="card card--large">
    <div class="card__header">
      <h2 class="card__title">Hi</h2>
      <h3 class="card__author">I'm an author</h3>
    </div>
    <div class="card__body">
      <p class="card__copy">I'm the copy</p>
      <img class="card__thumbnail">
    </div>
  </div>
</div>

Conclusion

Object Oriented principles can help make your CSS more maintainable. At the end of the day, it's about what works for your team and for your personal taste. I've shown you how I like to organize my CSS; I'd love to hear back from you with what's worked for you and your teams! Discover more object-oriented principles in our article on mutability and immutability.

About the author

Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz's passion for full stack development and digital media makes her a natural fit at Pop Art. When she's not in the office, you can find Liz attempting parkour and checking out interactive displays at museums.

Carthage: Dependency management made git-like

Nicholas Maccharoli
01 Apr 2016
4 min read
Why do I need another dependency manager?

Carthage is a decentralized dependency manager for iOS and OS X frameworks. Unlike CocoaPods, Carthage has no central location for hosting repository information (like podspecs). It dictates nothing about what kind of project structure you should have, aside from optionally having a Carthage/ folder in your project's root folder, housing built frameworks in Build/ and, optionally, source files in Checkouts/ if you are building directly from source. This folder hierarchy is generated automatically after running the command carthage bootstrap.

Carthage leaves it open to the end user to decide how to manage third-party libraries: either check both the Cartfile and the Carthage/* folders into source control, or keep just the Cartfile, which lists the frameworks you wish to use in your project, under source control. Since there is no centralized source of information, project discovery is more difficult with Carthage, but other than that, normal operation is simpler and less error-prone compared with other package managers.

The Setup of Champions

The best way to install and manage Carthage, in my opinion, is through Homebrew. Just run the following command and you should be in business in no time:

brew install carthage

If for some reason you don't want to go the Homebrew route, you are still in luck! Just download the latest and greatest Carthage.pkg from the Releases page.

Common Carthage Workflow

Create a Cartfile with dependencies listed and, optionally, branch or version info. Cartfile grammar notes: the first keyword is either 'git' for a repository not hosted on GitHub, or 'github' for a repository hosted on GitHub. Next is the location of the repository; if the prefix is 'git', this is the same address you would type when running git clone. The third piece is either the branch you wish to pull the latest from, or the version number of a release with one of the following operators: ==, >=, or ~>.

github "ReactiveCocoa/ReactiveCocoa" "master"  # Latest version of the master branch of ReactiveCocoa
github "rs/SDWebImage" ~> 3.7  # Version 3.7 and versions compatible with 3.7
github "realm/realm-cocoa" == 0.96.2  # Only use version 0.96.2

Basic Commands

Assuming that all went well with the installation step, you should now be able to run carthage bootstrap and watch Carthage go through the Cartfile one by one and fetch the frameworks (or build them after fetching from source, if using --no-use-binaries). Given that this goes without a hitch, all that is left to do is add a new run script phase to your target. To do this, simply click on your target in Xcode, and under the 'Build Phases' tab, click the '+' button and select "New Run Script Phase". Type this in the script section:

/usr/local/bin/carthage copy-frameworks

Then, below the box where you just typed the last line, add the input files of all the frameworks you wish to include, along with their dependencies.

Last but not least

Once again, click on your target and navigate to the General tab, go to the Linked Frameworks and Libraries section, and add the frameworks from [Project Root]/Carthage/Build/[iOS or Mac]/* to your project. At this point, everything should build and run just fine. As the project requirements change and you wish to add, remove, or upgrade framework versions, just edit the Cartfile, run carthage update, and, if needed, add new or remove unused frameworks from your project settings. It's that simple!
A Note on Source Control with Carthage

Given that all of your project's third-party source and frameworks are located under the Carthage/ folder, in my experience it is much easier to simply place this entire folder under source control. The merits of doing so are simple: when cloning the project or switching branches, there is no need to run carthage bootstrap or carthage update. This saves a considerable amount of time, and the only cost is an increase in the size of the repository.

About the author

Nick Maccharoli is an iOS / backend developer and open source enthusiast working with a startup in Tokyo, enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

Deployment done right – Teletraan

Bálint Csergő
03 Aug 2016
5 min read
Tell me, how do you deploy your code? If you still git pull on your servers, you are surely doing something wrong. How will you scale that process? How will you eliminate the chance of human failure? Let me help you; I really want to. Have you heard that Pinterest open sourced its awesome deployment system, Teletraan? If not, read this post. If yes, read it anyway; maybe you can learn from the way we use it in production at endticket.

What is Teletraan?

It is a deployment system that consists of three main pieces:

The deploy service is a Dropwizard-based Java web service that provides the core deploy support. It's actually an API: the very heart and brain of the deployment service.
The deploy board is a Django-based web UI used to perform day-to-day deployment work. Just an amazing user interface for Teletraan.
The deploy agent is the Python script that runs on every host and executes the deploy scripts.

Is installing it a pain? Not really; the setup is simple. But if you're using Chef as your config management tool of choice, take these, since they might prove helpful: chef-teletraan-agent, chef-teletraan.

Registering your builds

Let the following snippet speak for itself:

import json

import requests

# teletraan_host, teletraan_name, name, branch, commit, artifact_base_url
# and artifact_name are values supplied by your build pipeline.
headers = {'Content-type': 'application/json'}
r = requests.post("%s/v1/builds" % teletraan_host,
                  data=json.dumps({'name': teletraan_name,
                                   'repo': name,
                                   'branch': branch,
                                   'commit': commit,
                                   'artifactUrl': artifact_base_url + '/' + artifact_name,
                                   'type': 'GitHub'}),
                  headers=headers)

I've got the system all set up. What now?

Basically, you have to make your system compatible with Teletraan. You must have an artifact repository available to store your builds, and you must add deploy scripts to your project. Create a directory called "teletraan" in your project root and add the following scripts to it:

POST_DOWNLOAD
POST_RESTART
PRE_DOWNLOAD
PRE_RESTART
RESTARTING

Although referred to as Deploy Scripts, they can be written in any programming language as long as they are executable. Sometimes the same build artifact can be used to run different services based on different configurations. In this case, create different directories under the top-level teletraan directory with the deploy environment names, and put the corresponding deploy scripts under the proper environment directories separately. For example:

teletraan/serverx/RESTARTING
teletraan/serverx/POST_RESTART
teletraan/servery/RESTARTING

What do these scripts do? The host-level deploy cycle looks like this:

UNKNOWN -> PRE_DOWNLOAD -> [DOWNLOADING] -> POST_DOWNLOAD -> [STAGING] -> PRE_RESTART -> RESTARTING -> POST_RESTART -> SERVING_BUILD
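To make that lifecycle concrete, here is what one of these deploy scripts might look like. This is a minimal sketch, not taken from Teletraan itself: the port and health-check path are illustrative assumptions, and a real script would check whatever your service actually exposes.

#!/usr/bin/env python3
# teletraan/POST_RESTART -- illustrative sketch: verify the service came
# back up after a restart; a non-zero exit tells the deploy agent this
# step failed, so Teletraan can apply whatever failure policy you chose.
import sys
import time
import urllib.request

HEALTH_URL = 'http://localhost:8080/health'  # assumed endpoint; adjust to your service

for _ in range(10):
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            if resp.status == 200:
                sys.exit(0)  # healthy: let the deploy continue
    except OSError:
        pass  # service not answering yet; retry below
    time.sleep(3)

sys.exit(1)  # still unhealthy: fail the step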
Autodeploy?

You can define various environments. In our case, every successful master build ends up on the staging cluster automatically. This is powered by Teletraan's autodeploy feature, and it works nicely: whenever Teletraan detects a new build, it gets automatically pushed to the servers.

Manual control

We don't autodeploy code to the production cluster. Teletraan offers a feature called "promoting builds". Whenever a build proves to be valid on the staging cluster (some automated end-to-end testing and, of course, manual testing is involved in the process), the developer has the ability to promote a build to production.

Oh no! Things went wrong. Is there a way to go back?

Yes, there is a way! Teletraan gives you the ability to roll back any build that happens to be failing, and this can happen instantly. Teletraan keeps a configurable number of builds on the server for every deployed project; an actual deploy is just a symlink being changed to point at the new release.

Rolling deployments, oh the automation!

Deploy scripts should always run flawlessly. But let's say they do actually fail. What happens then? You can define it. There are three policies in Teletraan:

Continue with the deployment. Move on to the next host as if nothing happened.
Roll back everything to the previous version. Making sure everything is fine is more important than a hasty release.
Remove the failing node from production. We have enough capacity left anyway, so let's just cut off the dead branches!

This gives you all the flexibility and security you need when deploying your code to any HA environment.

Artifacts?

Teletraan just aims to be a deployment system, and nothing more; it serves that purpose amazingly well. You just have to notify it about builds, and you have to make sure that your tarballs are available to every node where deploy agents are running.

Lessons learned from integrating Teletraan into our workflow

It was actually a pretty good experience, even when I was fiddling with the Chef part. We use Drone as our CI server, and there was no plugin available for Drone, so that had to be written as well. Teletraan is a new kid on the block, so you might have to write some lines of code to make it a part of your existing pipeline. But I think that if you're willing to spend a day or two on integrating it, it will pay off.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

Icon Haz Hamburger

Ed Gordon
30 Jun 2014
7 min read
I was privileged enough recently to be at a preview of Chris Chabot's talk on the future of mobile technology. It was a little high-level (conceptual), but it was great at getting the audience thinking about the implications that "mobile" will have in the coming decades; how it will impact our lives, how it will change our perceptions, and how it will change physically. The problem, however, is that mobile user experience just isn't ready to scale yet. The biggest challenge facing mobile isn't its ability to handle an infinite increase in traffic; it's how we navigate this new world of mobile experiences. Frameworks like Bootstrap et al have enabled designers to make content look great on any platform, but finding your way around, browsing, on mobile is still about as fun as punching yourself in the face. In a selection of dozens of applications, I'm in turn required to perform a ballet of different digital interface interactions: pressing, holding, sliding, swiping, pulling (but never pushing?!), and dragging my way to the article of choice.

The hamburger eats all

One of the biggest enablers of scalable user interface design is going to be icons, right? A picture paints a thousand words. An icon that can communicate "Touch me for more…" is more valuable in the spatio-prime real estate of the mobile web than a similarly wordy button. Of course, when the same pictures start meaning a different thousand words, everything starts getting messy. The best example of icons losing meaning is the humble hamburger icon. Used by so many sites and applications to achieve such different end goals, it is becoming unusable. Here are a few examples from fairly prominent sites:

Google+: Opens a reveal menu, which I can also open by swiping left to right.
SmashingMag: Takes me to the bottom of the page, with no way to get back up without scrolling manually. The reason for this remains largely unclear to me.
eBay: Changes the view of listed items. Feels like the Wilhelm Scream of UI design.
LinkedIn: Drop-down list of search options, no menu items.
IGN: Reveal menu which I can only close by pressing a specific part of the "off" page. Can't slide it open.

There's an emerging theme here: it's normally related to content menus (or search), and it normally happens through some form of CSS trickery that either drops down or reveals the "under" menu. But this is generally speaking. There's no governance, and it introduces more friction to the cross-site browsing experience. Compare the hamburger to the humble magnifying glass. How many people have used a magnifying glass? I haven't. Despite this setback, through consistent use of the icon with consistent results, we've ended up with a standard pattern that increases the usability and user experience of a site. Want to find something? Click the magnifying glass. The hamburger isn't the only example of poorly implemented navigation; it's just indicative of the distance we still have to cover to get to a point where mobile navigation is intuitive. The "Back", "Forward", and "Refresh" buttons have been a staple of browsers since Netscape Navigator; they have aided the navigation of the Web as we know it. As mobile continues to grow, designers need similarly scalable icons, with consistent meaning. This may be the hamburger in the future, but it's not at that point yet.

Getting physical, or, where we discuss touch

Touch isn't yet fully realized on mobile devices. What can I actually press?
Why won’t Google+ let me zoom in with the “pinch” function? Can I slide this carousel, or not? What about off-screen reveals? Navigating with touch at the moment really feels like you’re a beta tester for websites; trying things that you know work on other sites to see if they work here. This, as a consumer, isn’t the base of a good user experience. Just yesterday, I realised I could switch tabs on Android Chrome by swiping the grey nav bar. I found that by accident. The one interaction that has come out with some value is the “Pull to refresh” action. It’s intuitive, in its own way, and it’s also used as a standard way of refreshing content across Facebook, Twitter, and Google+—any site that has streamed content. People can use this function without thinking about it and without many visual prompts now that it’s remained the standard for a few years. Things like off-screen reveals, carousel swiping, and even how we highlight text are still so in flux that it becomes difficult to know how to achieve a given action from one site to the next. There’s no cross application consistency that allows me to navigate my own browsing experience with confidence. In cases such as the Android Chrome, I’m actually losing functionality that developers have spent hours (days?) creating. Keep it mobile, stupid Mobile commerce is a great example of forgetting the “mobile” bit of browsing. Let’s take Amazon. If I want to find an Xbox 360 RPG, it takes me seven screens and four page loads to get there. I have to actually load up a list of every game, for every console, before I can limit it to the console I actually own. Just give me the option to limit my searches from the home page. That’s one page load and a great experience (cheques in the post please, Amazon). As a user, there are some pretty clear axioms for mobile development: Browser > app. Don’t make me download an app if it’s going to require an Internet connection in the future. There’s no value in that application. Keep page calls to a minimum. Don’t trust my connection. I could be anywhere. I am mobile. Mobile is still browsing. I don’t often have a specific need; if I do, Google will solve that need. I’m at your site to browse your content. Understanding that mobile is its own entity is an important step – thinking about connection and page calls is as important as screen size. Tools such as Hood.ie are doing a great job in getting developers and designers to think about “offline first”. It’s not ready yet, but it is one possible solution to under the hood navigation problems. Adding context A lack of governing design principles in the emergent space of mobile user experience is limiting its ability to scale to the place we know it’s headed. Every new site feels like a test, with nothing other than how to scroll up and down being hugely apparent. This isn’t to say all sites need to be the same, but for usability and accessibility to not be impacted, they should operate along a few established protocols. We need more progressive enhancement and collaboration in order to establish a navigational framework that the mobile web can operate in. Designers work in the common language of signification, and they need to be aware that they all work in the same space. When designing for that hip new product, remember that visitors aren’t arriving at your site in isolation–they bring with them the great burden of history, and all the hamburgers they’ve consumed since. T.S. Eliot said that “No poet, no artist of any art, has his complete meaning alone. 
His significance, his appreciation is the appreciation of his relation to the dead poets and artists”. We don’t work alone. We’re all in this together.

Packt Explains... Deep Learning

Packt Publishing
29 Feb 2016
1 min read
If you've been looking into the world of Machine Learning lately, you might have heard about a mysterious thing called "Deep Learning". But just what is Deep Learning, and what does it mean for the world of Machine Learning as a whole? Take less than two minutes out of your day to find out, and to get a sense of the awesome potential Deep Learning has, with this video. Plus, if you're already in love with Deep Learning, or want to finally start your Deep Learning journey, be sure to pick up one of our recommendations below and get started right now.

Unity 5.3: What You Need to Know

Raka Mahesa
15 Jun 2016
6 min read
If you're a game developer, there's a big chance you're using Unity as your game engine of choice. Unity, and game development itself, has grown quite complex since its inception, so it's getting hard to keep track of which features are included in a game engine and which are not. So let me try to help you with that, and let's see what the latest version of Unity, version 5.3, has in store for you.

Multi Scene Editing

Let's first talk about what, in my opinion, is the biggest addition to the arsenal of tools available to Unity users: Multi Scene Editing. So what does Multi Scene Editing do? To put it simply, this new editing capability allows a scene to contain other scenes. With this new feature, you can have a "House" scene composed of "Bedroom" and "Kitchen" scenes instead of one big "House" scene that holds all of the needed objects. You can edit all of them together, or edit just one scene separately if you want to focus on it. This new feature doesn't just help you manage bigger scenes; it also helps you work together with other developers, since each person can now edit the same scene separately with less chance of conflict. (A screenshot on the Unity website gives you an idea of what this looks like.)

Using Multi Scene Editing is quite simple. All you have to do is drag a scene (or more) to the hierarchy window of your current scene. Unity will then automatically add that scene and all its objects to the open scene. The hierarchy window shows which scene has which objects, so you don't need to worry about getting confused. Another thing to note is that Unity will set your base scene as the active scene, meaning that any change you make during runtime will be applied to that scene by default. If you want to access a scene other than the active one, you can do so with the SceneManager.GetSceneAt() function. Also, Multi Scene Editing supports both 3D and 2D scenes, so you don't have to be concerned about how many dimensions your game has. Speaking of 2D...

More Physics2D Tools

Unity 5.3 added a bunch of new 2D physics tools that you can use to create various compelling mechanisms for your game. The biggest of the bunch is the BuoyancyEffector2D, a new physics tool that can easily make objects float (or sink) in water based on their weight. While previously you could simulate the floating effect using real-world physics equations, this new tool is a tremendous help if you don't want to dabble in physics too much or just want to quickly prototype your game. Another really useful 2D physics tool introduced with this version is the FrictionJoint2D tool. While the 2D physics engine is mostly relevant for platformer-type games, sometimes you may also want a little physics in your top-down 2D games. So if, for example, you want enemies in your top-down brawler to be able to get knocked back, congratulations: FrictionJoint2D enables you to do exactly that. Other than that, there are also the new FixedJoint2D and RelativeJoint2D tools, which allow objects to be glued to each other, and the TargetJoint2D tool, which allows an object to keep following another object.

In-App Purchase Support

For quite a long time, Unity left the implementation of in-app purchases up to third parties, and so plenty of plugins that help developers integrate IAP flourished in the asset store.
Well, in the latest version, Unity has apparently decided to get more hands-on, adding in-app purchase integration directly to the engine. Right now the integration supports the official app stores for iOS, Android, Mac, and Windows. Unity's in-app purchase integration has a similar approach and functionality to most IAP plugins on the market today, so if you're currently using a third-party plugin for your IAP needs, you're in no hurry to replace it. That said, Unity is usually pretty quick to adapt to changes in the platforms it supports, so you may want to consider switching to Unity's IAP integration if you want to use the latest features of the platform of your choice.

More iOS Integration

In their last few product announcements, Apple introduced various additions to the iOS platform, both hardware- and software-wise. Although previously you needed external plugins to utilize those additions, Unity 5.3 finally supports these new features, so you don't have to rely on third-party plugins. The new iOS-specific additions are 3D Touch support, iPad Pro pen support, and support for the app slicing and on-demand resource features that are heavily used in Apple TV apps.

JSON API

Another very welcome addition that came with Unity 5.3 is an official API for handling the JSON format. At a glance it may not seem that significant, but JSON is a format that is widely used for storing data, so official support from Unity is really good news. That said, the implementation of Unity's JSON API differs quite a lot from some of the popular JSON libraries out there, so switching libraries isn't going to be a simple matter.

As you can see, there are definitely a lot of new features in Unity 5.3. There are more features worth talking about, including the new version of MonoDevelop, the new features in the particle system, and the new samples for the VR project. That said, the items I've listed here are the features that, in my opinion, you most need to know. Feel free to check out the changelog yourself if you want to know more about Unity 5.3.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
An introduction to Juju

Adam Israel
16 Nov 2015
5 min read
You've finished that application you've been working on for weeks or months, and you're ready to show it to the world. So now you log in to your favorite cloud, launch instances, and set up monitoring, security groups, and firewalls. All of that is well and good, but it's tedious work. You have to learn how to use all of your cloud's proprietary bits and services. If you decide to move to a different cloud later, you'll have to learn that provider's quirks and possibly rewrite portions of your application that depended on cloud-specific services. Juju solves that problem (and more). Juju is a cloud orchestration and service modeling toolkit. With it, you can deploy your application to clouds like Amazon, Azure, Google Compute Engine (GCE), Joyent, or Digital Ocean, or to your own bare metal via MAAS. Best of all, Juju does this in a repeatable, reliable fashion with a few simple commands.

Why should you care?

Being in DevOps means being agile. It provides the fast iteration of writing code, testing code, and deploying code. And Juju embraces the DevOps philosophy by taking the tedious and time-consuming tasks and making them nimble. In the past, I managed dozens of servers with a set of bash scripts wrapped around ssh and rsync to deploy updated code, and I manually managed database and memcached clusters as well as load balancers. Later, we upgraded to a Puppet and Kickstart workflow. Each method worked, and each was an improvement on the previous one, but neither was spectacular. I wish I'd known about Juju at the time. I would have spent far less time deploying and more time coding.

The anatomy of Juju

A Charm is a structured collection of files that describe what you're installing, how to install it, its configuration options, and what other service(s) it speaks to:

.
├── actions
│   ├── backup
│   └── benchmark
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   └── upgrade-charm
├── config.yaml
└── metadata.yaml

Actions are scripts that run against your application. Not all charms have them, but those that do usually encapsulate administrative tasks. They can also run benchmarks to analyze the performance of your application, for example across different clouds or hardware configurations, in order to find the best balance between cost and performance.

Hooks are things that run in response to something happening. They are also idempotent, meaning that they can be run multiple times with the same result. The standard set of hooks includes:

The install hook is the first executed. As the name implies, it installs any software needed to run a program.
The start and stop hooks start or stop your application.
The config-changed hook is executed any time one of the options defined in config.yaml is changed.
The upgrade-charm hook handles updates to the charm or your application.

As a user, these hooks are executed for you when certain events happen. As a developer, there are a series of best practices to help you write charms that fit the above model; a sketch of what a hook can look like follows below.
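Since hooks are just executables, they can be written in any language. Here is a minimal illustrative sketch of an install hook in Python; the nginx package is an assumed example workload, not something from this article, and a real charm would do rather more than this:

#!/usr/bin/env python
# hooks/install -- a minimal, illustrative install hook.
# juju-log is one of the hook tools Juju exposes inside a hook's
# environment; the commands below are safe to re-run (idempotent).
import subprocess

subprocess.check_call(['juju-log', 'Installing workload packages'])
subprocess.check_call(['apt-get', 'update'])
subprocess.check_call(['apt-get', 'install', '-y', 'nginx'])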
Relationships

Relationships define the services your application interacts with. When a relationship is joined, each side exchanges information. This handshake may include credentials, hostnames and ports, filesystem locations, and more. Hooks, like the ones above, are executed when the state of a relationship changes. For example, adding a relation between your application and a database would fire the database-relation-joined event, which would provide you with the host name, database name, and credentials to use. The database would receive the host name of your application, allowing it to set ACLs to secure itself. This removes the need for editing configuration files by hand or by script.

A simple example of relations is Wordpress. In its metadata.yaml it declares that it requires a database and can optionally use memcached, as visualized in a screenshot taken from the Juju GUI: the tiles represent deployed services, and the lines between them represent the relationships. To achieve this model, we run the commands:

$ juju deploy mysql --constraints="cpu-cores=8 mem=32G"
Added charm "cs:trusty/mysql-29" to the environment.
$ juju deploy memcached
Added charm "cs:trusty/memcached-11" to the environment.
$ juju deploy wordpress
Added charm "cs:trusty/wordpress-3" to the environment.
$ juju add-relation wordpress mysql
$ juju add-relation wordpress memcached
$ juju expose wordpress

Scalability

As cool as charms and relationships are, you see what they can really do when it comes to scalability. In the above example, you can simply add machines to the Wordpress charm, which will automatically configure load balancing between each machine.

Bundles

So far, we've talked about charms and relationships. The real magic happens when you put them all together to create a model of your workload. We can create a bundles.yaml file that describes the services to deploy, any configuration options that should be changed from their defaults, and each service's relations to the others, and then stand up the whole model with a single command:

$ juju quickstart bundle.yaml

What's next

In my next post, we'll explore real-world examples of these concepts, including how to take the pain out of deploying OpenStack or Big Data solutions.

About the author

Adam Israel has worn many hats over the past twenty years, from help desk to Chief Technical Officer, from point-of-sale software to search engines and ad server platforms. He currently works at Canonical, Ltd as a Software Engineer, with a focus on cloud development and operations.

Say hello to DeepMap

Graham Annett
26 Jul 2017
5 min read
Yet another company making huge strides in the world of self-driving cars has emerged with the announcement of DeepMap. The web page and announcements mention that many of the engineers are former Google and Apple employees (both companies make maps and directions a huge aspect of their products, with Google Maps and Apple Maps being used by a vast number of people day to day), which is incredibly promising: these engineers likely have insightful, unique ideas that may not have been carried out at an already established company, or that may have taken much longer to come to fruition.

Self-driving vehicles

From a basic standpoint, self-driving vehicles are built around taking whatever input the car has and generating an output that relates to speed and direction (that is, given a traditional driving image, what speed and direction should the car use to keep on a path to its destination and avoid crashing). Many of the startups doing this seem to be focused on image-based deep learning systems as the underlying technology (typically, convolutional neural networks). These systems have made incredible strides in recent years, and the companies implementing autonomous vehicles have made tremendous progress (think of Google's self-driving car, or Tesla, which recently hired Andrej Karpathy as head of AI). There has also been a recent scramble among other companies to enter the autonomous vehicle arena and create competitive offerings (for instance, the recent company Andrew Ng has been associated with), and even scandals such as the Google-Uber lawsuit. These events are signs that this technology is going to become incredibly commonplace very soon, and will be an integral technology that people will come to expect in day-to-day life, somewhat akin to smartphones.

LiDAR systems

One of the interesting things about looking into DeepMap is that the company and its underlying technology seem to be heavily focused on LiDAR systems, versus the camera/image-based mapping approach that many other companies seem to be using. LiDAR is different in that it depends on pulsed light to create a representation of the 3D surfaces around it (quite an oversimplification). While I'm not an expert in autonomous vehicles, I'm guessing that a combination of LiDAR-based and image-based approaches will make for the first true autonomous vehicle, in that simply relying on one type of data is too dangerous when the stakes of self-driving cars carry huge implications for the technology companies behind them.

A continuously updating and dynamic system

After reading through the introduction post by the cofounders, James Wu and Mark Wheeler, I was struck by the fact that the company raised a sizable amount of money for something in stealth, and also by the many novel explanations and ideas in the post. One of the ideas that struck me as incredibly profound is viewing maps not as a static image that may be useful to humans, but as a continuously updating and dynamic system that incorporates an entire data stack and is useful to a machine such as a self-driving car. This may be obvious to people already in the autonomous vehicle industry, but as an outsider, it made me think that maps without a dynamic and deep data stack underlying them will not only be useless to autonomous vehicles, but perhaps even dangerous.
The map that humans see and glean knowledge from is fundamentally different from what would be useful to a machine, and that makes me curious about the implications in other realms where deep learning and "AI" are applied (for instance, in NLP and time-series data).

Healthy competition

Having actual competition among companies in the pursuit of a true self-driving car is a revolutionary development from a technological standpoint, as it encourages companies to try new and novel approaches in a field that will likely never be fully 'solved,' but will instead need constant and continuous improvement, much like other technology fields. (There is a five-level SAE classification of driving automation, and while I am not entirely sure where Tesla's or Google's vehicles currently stand, no one has yet achieved level 5, which is what would be necessary for autonomous cars to replace large swaths of industry and have worldwide impact.) In conclusion, the technical ideas that DeepMap brings, along with yet another company pushing forward the prospects of autonomous vehicles becoming commonplace, are incredibly promising and something to keep an eye on. Hopefully the products and technology they claim to be working on will be as groundbreaking as they propose, and won't just crash and burn like many technology startups seem to do.

About the author

Graham Annett is an NLP Engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on GitHub at http://github.com/grahamannett or via http://grahamannett.me.

Does Size Matter? The Tech Lives of Small and Large Businesses

Richard Gall
01 Sep 2016
9 min read
“Modern business is strange”

Modern business is strange. And this, really, is down to technology. Although the dot-com bubble may have burst almost twenty years ago, it's really the effects of the open source revolution of the past decade that we're feeling today. This was a period that really can be called innovative. It was a time of experimentation and invention, when the rigid structures of traditional IT (and the stereotypes of the traditional IT team that existed alongside them) all but disappeared. Everything seemed to fragment. And this includes business culture itself. In its place, the sharp and smart startup became a new signifier for 21st-century business. Gone was the stuffy boredom of the drab office (see Office Space for an expert example), in favour of something more creative and casual, something where technology rules and no longer simply serves. Perhaps this is a bit of a caricature. But there's some truth to it: software culture, from the software itself to its project management methodologies, has had an impact on the way that even the largest companies see themselves. And why should we be surprised? When a multibillion-dollar company like Facebook started life in a college dorm, we're reminded that we're living in a new business landscape, where technology has flattened barriers to entry. Where size no longer seems to matter.

But does size really not matter anymore? We looked closely at the technological differences between large and small companies in our 2016 Skill Up report, published earlier this summer in July. The picture it paints is interesting. While it highlights that the software used by businesses does shift according to size, it also demonstrates that there is a lot of crossover. Size matters. Sort of. The concerns and challenges that many organizations are facing are perhaps more similar than we might have thought.

Small Businesses

Let's first look at the tools being used by survey respondents working in smaller companies. It's worth noting that micro refers to organizations of 1-20 employees, small 21-99, medium 100-1000, and large any number exceeding that. A large proportion of the data here has come from respondents working for small development startups; these are likely companies that wouldn't even exist without these useful tools and frameworks, essentially small teams that have come together to take advantage of what these tools are capable of. Unreal is there at the top. On the one hand, yes, this shows it's up there with Unity when it comes to game development tools, but it also suggests that game development is an industry thriving on small development teams. It's a huge industry, and perhaps one that's growing due to a combination of developer enthusiasm and entrepreneurialism. It's also worth noting Blender's appearance, as well as Android Studio's. Android Studio is not quite as popular with micro businesses (startup size), but overall it is a crucial tool for smaller businesses. This perhaps underlines the importance of mobile for modern businesses: whether these developers are working for software development companies specifically, or for companies that believe they need mobile solutions to meet customer needs, it's clear that Android Studio is proving to be incredibly popular.

It's also notable that WordPress performs so well here. It's a tool that has expanded beyond the tech and developer sphere, becoming a mass-market tool for just about anyone who wants to write content for the web. But what about the frameworks right at the heart of the web development world? It's hard to pick a clear narrative thread here. Meteor is pretty high on the list among developers in small businesses, a tool at the cutting edge of web development, but to confuse matters, PHP is right next to it, a language that's unlikely to be called 'cutting-edge'. What we're likely seeing here is simply a fact of the fast-paced reality of web development, perhaps the one area of software where keeping up with change and new frameworks is most difficult. One only has to glance at the diverse range of frameworks listed here to see that it's difficult to characterize small businesses in any way other than simply asserting that diversity is a fact of life: there is a huge range of tools available for performing similar tasks and building similar types of products, all offering different features, and all having different benefits depending on exactly what you need to deliver. One way of looking at it is to say that it's about marginal gains. The most popular software consists of the tools that make your life easier, that fit comfortably into your workflow, and that can help you build the type of products that you (or, indeed, your clients) want, depending on what you're currently using or have been using, and on the purpose of the product (Is it an app? A game? Does it need to be dynamic? Or would something more static do a perfectly good job?).

Technology used in larger organizations

If, for smaller organizations, it's all about diversity and finding tools that fit comfortably into the way you work, what about larger companies? As you can see, the list does look a little different. The trend here, broadly speaking at least, is toward operational tools rather than specific frameworks. For the most part it's all about data, cloud, and virtualization: terms that have unfortunately become somewhat empty buzzwords for vendors trying to shift their clunky enterprise software. But that's not to say that anything listed here should be characterized as clunky. Tableau, sitting pretty at the top of the table, makes data visualization simple; it makes Business Intelligence accessible and easily manageable. It's worth noting that a similar proportion of respondents from micro-sized businesses are working with Tableau as from medium-sized businesses. This means that while it's a tool in the tradition of enterprise analytics, its appeal is not limited to the world of corporate tech. Elsewhere on the list there are clear nods to the ongoing importance of Big Data; Hadoop and Spark are clearly popular. But again, we'd be wrong to assume these are only of interest to those in large businesses. For software pros in medium and smaller organizations, these tools are playing an important role in business strategy too.

Virtualization and cloud tools also emerge as important tools for large businesses. This is evidence of an increasing need to take control of software infrastructure, for reasons relating to both security and resource efficiency. This is perhaps of less concern to smaller organizations, for whom free tools can facilitate collaboration and resource sharing at no cost. But the impetus for these tools that can virtualize or 'abstract' resources comes from a similar place as the diversification we saw earlier. Fundamentally, it's all about finding solutions that fit around your problems. Arguably, these solutions may feel like new challenges for large organizations, but by spending some time reflecting on what's important for their business, this relative freedom can be essential to success.

Size matters when it comes to tech. Sort of…

Both graphics demonstrate some slight differences in focus between larger and smaller organizations, but the focus of both is ultimately on making those marginal gains, whether that's in terms of the right framework for the job or the effective use of virtualization to use tight resources more intelligently. But more than that, while we may be able to discern shifting focuses that correlate with company size, there is nevertheless a lot of crossover in what's important. Strangely, there appears to be more common ground between the 'micro'-sized businesses and the large ones. This suggests that when it comes to properly taking advantage of software, the organizations in the best position are those with the resources and time to invest in skill development and learning, or those that are streamlined and agile enough, maybe simply enthusiastic enough, to keep up to speed with what's new and what's important.

Stuck in the middle

The real challenge, then, is for those organizations in the middle. They might have ambitions to become an industry giant but lack the resources. They might even be attracted to the startup mentality but are burdened with legacy systems and cumbersome processes. To take advantage of some of these massive opportunities will require detailed, even critical, self-reflection and a renewed focus on organizational purpose and strategic priorities. It also means these organizations will need to fashion a different tech-oriented culture.

Is open source making tech training harder?

Hari Vignesh
16 Oct 2017
5 min read
The open source software movement has sparked an incredibly rich community of collaborative software developers producing wave after wave of applications. What started as a lofty ideal has become the norm. As many as 93 percent of organizations use open source software, and 78 percent run part or all of their operations on it, according to The Tenth Annual Future of Open Source Survey.

Most open source software projects come to life because someone is trying to scratch an itch. A group of coders, or a team of academics, or a fast-moving startup will build some software that solves a very real computing problem, and then they'll open source the code, sharing it with the world at large. Maybe the coders are trying to help the larger world of software developers, believing that others will find the code useful too. Maybe they're trying to get more eyes on their code, hoping that others will contribute bug reports and fixes to the project. Or maybe, as is typically the case, they're trying to do both.

The popular data-crunching tool Hadoop is a great example. Doug Cutting and Mike Cafarella started Hadoop to solve scalability problems they had with their open source search engine software, Nutch. Then Yahoo saw the work they were doing, realized it would be useful, and hired Cutting to develop it further. Soon, other companies like Facebook and eBay joined in as well. Today, Hadoop is used by countless companies to crunch data, and several commercial outfits have sprung up to support and develop the software and its ecosystem.

There is a nearly endless number of open source projects that have evolved along similar lines, including the Apache web server, the Ruby on Rails programming framework, and, of course, the Linux operating system. But in recent years, we're seeing many old-school tech companies, companies that predate the recent open source revolution, use open source in very different ways. Now, companies like Microsoft, Cisco, and Salesforce are creating new open source projects, mainly as a means of promoting new or existing products and services. But will adopting an open source project really make the training of your team harder? Well, it depends on the type of project you decide to go with. Your team's learning curve will be steep if you fall short on the points below.

Choosing the type of OSS

There are different types of OSS; a project can be as small as a plugin or a little library, or as big as an enterprise application. If it's a library or plugin that performs one or two functions or features in your product, it is advisable to go with it, because if you need a small modification or something built on top of it, not much training is required, unless it's a poorly rated library. But if you want to go with big enterprise software, you need to think more than twice before adopting it. If you're selling a product, it's advisable to build from scratch, so that you have more control over it. If it's for internal use, you can happily go with OSS; no matter how bad it looks or behaves, only internal employees will be using it.

Developer mindset

Many developers don't wish to work in a legacy code base; they always prefer to start from scratch. But if they are exposed to a legacy code base built on top of some other OSS, they won't be pleased. The learning curve will certainly be greater, because they need to understand the current system, and on top of that they need to understand the OSS codebase as well, and how it's been tailored to the current system. If the OSS is a famous one like Hadoop, where the support is enormous and the developer community is quite active, you will have more developers who have been exposed to the software, and you will certainly find skilled professionals to tailor it to your needs. But if the OSS is not famous, your team will face a steeper learning curve.

Be prepared for patches and updates

Updates and patches will fly at you continuously if the project is active. Software is never 100 percent perfect, and as it is OSS, team availability will be lower and bug fixes won't be immediate. So you need a lot of patience, and there is the possibility that a bigger update will roll out as well, which you should not let affect your tailored modules. On those occasions, your team's learning curve will be high. As discussed previously, software support really matters. There are a few OSS companies that provide support, either free or paid. This helps keep your development cost, and the spike in the learning curve, to a minimum. But if the support does not exist, you have no choice but to be patient with your tech team. They really need time to understand the system and tailor it.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
What standards are needed in IoT

Raka Mahesa
17 Apr 2017
5 min read
The Internet is one of the greatest inventions of the 20th century. It started its life in the mainframe computers of military and academic organizations before making a jump to personal computers near the turn of the century. A few years later, the Internet made another jump, to mobile phones, and enabled us to connect to the World Wide Web anywhere and anytime we wanted. Each time we connect a device to the Internet, the device seems to become smarter and more useful; so, what if we connect every device to the Internet? This is happening, and this is how we have created the Internet of Things, also known as IoT.

With all that said, it's important to keep in mind that IoT is not simply about having Internet connectivity on our devices. IoT is about a network of devices, or things, that can communicate with each other so that they can be more useful to their user. You may wonder how devices that communicate with each other become more useful. Well, just imagine: when you set the alarm on your phone to wake you up at 7 AM, it will automatically tell your lamps to turn on 10 minutes after that, and your coffee machine to start brewing right away. Or when you exit your house, your networked gate will tell your air conditioner to turn itself off to conserve electricity. Or it can be a communication as simple as turning on the heater from your phone on your way home. That is IoT on a small scale. If you go bigger, you can get grander systems: a bus system that dynamically allocates itself based on how crowded a bus stop is, a fully networked car fleet that can prevent traffic gridlock from happening, or an automated agricultural system that farm owners can control from an app on their phones.

While governments all around the world are trying their best to build their own connected cities, small-scale IoT unfortunately doesn't really reach the masses. Even a simple house automation system is limited to either the hobbyist or a single type of device, like the Philips Hue. One of the reasons behind this problem is the lack of unifying standards between devices, and when every component behaves differently, it can be difficult to create a fully networked system.

(image from http://techstory.in/wp-content/uploads/2016/06/Smart-Home-2.jpg)

Before we talk further about standards specifically for IoT, we first need to discuss technological standards in general. What are standards? What's the benefit of having standards in technology? To put it simply, standards are a set of rules that specify how a certain thing should behave. When a piece of technology follows a standard, all the parties involved know how the device should work and can focus on other, more important things. For example, since all audio jacks on phones follow the same standards, users can buy headphones of any brand without worry, and headphone manufacturers can focus on the audio performance of their accessories because they know the headphones will work on every buyer's phone. In a world without standards, the users of, say, a Samsung phone would only be able to use Samsung headphones. What if Samsung's headphones don't have the quality you want? What if the only good headphones available cannot be used with Samsung phones? By following a standard, one company can focus only on its phones while another company focuses only on its headphones, each playing to its strengths.

(image from https://www.extremetech.com/wp-content/uploads/2016/09/35jack.jpg)

So now that we better understand standards, what kinds of standards do we need for the Internet of Things? The most important standard needed in the Internet of Things at the moment is a communication standard: a set of rules for communicating between devices. One part of this standard should be about hardware, for example, what kind of connectivity should be used for communication. Should it be WiFi? Should it be Bluetooth? Should wired connections even be considered? The other part of the needed communication standard should be about software, for example, what messaging protocol should be used for communication, and what kind of encryption should be used for the message packets. We should also keep in mind that these connected devices may not have the computing power needed to process heavily encrypted messages.
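To illustrate what a shared messaging protocol buys you, here is a minimal sketch in Python using MQTT, a lightweight publish/subscribe protocol often discussed for IoT. MQTT is only one candidate among several (CoAP and AMQP are others), and the broker address and topic name below are illustrative assumptions, not something defined by any standard:

# A "thing" publishing its state over MQTT with the paho-mqtt client
# library (pip install paho-mqtt). Broker host and topic are made up
# for illustration; any device speaking the same protocol could listen.
import paho.mqtt.client as mqtt

BROKER = 'broker.example.com'  # assumed MQTT broker on the local network
TOPIC = 'home/alarm/state'     # assumed topic naming scheme

def on_connect(client, userdata, flags, rc):
    # Subscribe so messages from other devices on this topic reach us too.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    print('%s -> %s' % (msg.topic, msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)    # 1883 is the standard MQTT port
client.publish(TOPIC, 'armed_7am')  # announce our state to the network
client.loop_forever()               # process network traffic and callbacks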
Another standard that is needed is a development standard. Software developers need a standardized way of developing for connected devices. Right now, all of the IoT hardware platforms, such as Arduino, Tessel, and Intel Edison, have their own development environments. Imagine if an app that ran on an LG phone couldn't be executed on a Xiaomi phone.

One other standard that we need for the Internet of Things is a standard for security and authentication. How do we ensure that the person accessing a device is the same person accessing another device in the system? Right now, authentication is handled by creating an account on our phone, but a better, independent authentication method is sorely needed in the Internet of Things.

We still have a long way to go before we can fully realize the potential of the Internet of Things. Right now, a lot of big companies are competing with each other, each pursuing its own platform for connected devices. I believe, however, that the way for IoT to move forward is for these companies to start collaborating and setting the standards that will define the future of the Internet of Things.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. He also regularly tweets from @legacy99.


Discover the Secrets of the Bitcoin Blockchain

Alex Leishman
10 Aug 2015
5 min read
In a previous post, Deploy Toshi Bitcoin Node with Docker on AWS, you learned how to set up and run Toshi, a Bitcoin node built with Ruby, PostgreSQL, and Redis. We walked through how to provision our AWS server, run our Docker containers, and access Toshi via the built-in web interface. As you might notice, it takes a very long time (~two weeks) for the blockchain to sync and for the Toshi database to be fully populated. However, Toshi sees new transactions in the Bitcoin network once it starts running, which allows us to start playing with the data soon after it is up and running. In this post we will walk through a few simple ways to analyze our new blockchain data to gain insights into, and uncover some mysteries of, the blockchain.

First we need a way to access our database in the toshi_db Docker container. I've found that the easiest way to do that is to set up a new container running psql, the PostgreSQL interactive terminal. Run the command below, which will take you to the psql command line inside a new container.

toshi@ip-172-31-62-77:~$ sudo docker run -it --link toshi_db:postgres postgres sh -c 'exec psql -h "172.17.0.3" -p "5432" -U postgres'
psql (9.3.5)
Type "help" for help.
postgres=#

Remember to replace my Postgres IP with your own. Any commands run from here are actually being done IN the psql container. Once we quit psql (\q), the container will be stopped. Let's explore the data!

schema.txt

There are 37 tables in Toshi. Let's find out what some of them have to offer:

1_query.txt

Here we queried the first two entries in the addresses table. We can see that Toshi stores the 'total_received' and 'total_sent' values for each address. This allows us to calculate the balance of any bitcoin address with ease, something not possible with bitcoind. We also queried the count of entries in the 'addresses' table; this value will continually change as your data syncs and new transactions are created in the network.

Another interesting column in this table is 'address_type', which tells us whether an address is a traditional Pay to PubKey Hash (P2PKH) address or a Pay to Script Hash (P2SH) address. In the 'addresses' table, P2PKH addresses have 'address_type = 0' and P2SH addresses have 'address_type = 1'. Querying the number of P2SH addresses can help us estimate the percentage of Bitcoin users taking advantage of multi-signature addresses. To learn more about the different address types, look here.

If we want to learn more about a specific table in the Toshi DB, we can use the \d command followed by the table name. For example:

2_query.txt

This lists the columns, indices, and relations of the 'unspent_outputs' table.

Now, let's look for something a bit more fun – hidden messages in the blockchain. The opcode OP_RETURN allows us to create bitcoin transactions with zero-value outputs used to store a string of data. The hexadecimal code for OP_RETURN is '\x6a'. Therefore, we can search for OP_RETURN outputs using the following query:

3_query.txt

If we convert each of the script hex values to UTF-8 (with an online tool), we get some interesting results. Most of them are just noise (they may be encrypted), but there are a few decoded values that stand out:

(CaMarche!ޛKꈶ-誗�bَˆګ㟟ǒo⁏ˀ
"OAu=https://cpr.sm/aNSAyIRJSr
(DOCPROOFv㒶nۗ䧴꠿کRS:Cw-J䙻$:_䌽
(Test0004嬟zN畆oﱵ联؞¬⋈ǦQ<㺏֪怀

The first result includes the French expression "Ça marche", meaning "OK, that works". The second result includes a URL that leads to the JSON description of a colored coin asset. To learn more about colored coins, check out Coinprism. The third result includes the text 'DOCPROOF', which indicates that the output was used for proof of existence, allowing a user to cryptographically prove the existence of a document at a point in time. The last result looks like somebody just wanted to play around and test out OP_RETURN.
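If you would rather not paste hex into an online converter, the same decoding can be done locally. Here is a minimal sketch in Java; the example script hex is made up for illustration, and it assumes the common OP_RETURN layout of 0x6a followed by a one-byte push length and then the payload.

import java.nio.charset.StandardCharsets;

public class OpReturnDecoder {
    public static void main(String[] args) {
        // Hypothetical script hex: 0x6a (OP_RETURN), 0x0b (push 11 bytes), payload.
        String scriptHex = "6a0b48656c6c6f20776f726c64"; // payload decodes to "Hello world"
        byte[] script = hexToBytes(scriptHex);

        // Skip the opcode and the length byte, then decode the rest as UTF-8.
        String text = new String(script, 2, script.length - 2, StandardCharsets.UTF_8);
        System.out.println(text);
    }

    private static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}

Real OP_RETURN payloads are arbitrary bytes, which is why most decode to noise like the examples above.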
Lastly, if we want to export the results of a query from our container, we can copy them to STDOUT and then extract them from the container log afterwards.

4_query.txt

If we quit psql (\q), we find the name of our previously used psql container (now stopped):

ubuntu@ip-172-31-29-91:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c66179a35dbe postgres:latest "/docker-entrypoint. 2 minutes ago Exited (0) 15 seconds ago sad_shockley

Then we can export the log into a CSV file:

sudo docker logs sad_shockley > data.csv

You can now scp this CSV file back to your local machine for further analysis. Note that this file will include all commands and outputs from your psql container, so it may require some manual touch-ups. You can always start a new psql container for a fresh log.

So, in just a few minutes we were able to create a new psql Docker container, allowing us to explore blockchain data in ways that are impossible or very difficult with bitcoind. We discovered messages that people have left in the blockchain and learned how to export any queries we make into a CSV file. We have only scratched the surface; there are many insights yet to be discovered. Happy querying!

About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com, where he is helping to build the future of money.


5 Amazing Packages for Working Better with R

Sam Wood
28 Sep 2016
2 min read
The R language is one of the top choices of data scientists - in no small part thanks to the great packages and projects created to support it. If you want to expand the power and functionality of your R code, consider one of these popular and amazing options.

ggplot2

ggplot2 is a data visualization package, and widely believed to be one of the best reasons to use R. ggplot2 produces some of the most fantastic graphics and data visualizations you can get. It's so popular that there's even a port to use it in Python - R's great rival. ggplot2 takes care of many of the fiddly details of plotting, leaving you free to focus on interpreting your data insight.

Shiny

R's always been all about the data - but Shiny lets you take it onto the web! Without knowing any HTML or CSS, you can use Shiny to turn your R code into interactive web applications. From interactive visualizations to exceptional web interfaces, you'll be amazed what you can build with Shiny without a single line of JavaScript.

knitr

Use knitr to output your R analyses as dynamic reports in LaTeX, Markdown, and more. Developed in the spirit of literate programming for reproducible research, knitr is the perfect tool for ensuring that the documentation for your R analysis is clear and understandable to all.

packrat

Have you ever gotten frustrated trying to figure out dependency management in R? Packrat is there to help you in all those annoying situations where installing one library makes another piece of code stop working. It's great for ensuring that your R projects are more isolated, more reproducible, and more portable. How does it work? Packrat stores your package dependencies inside your project directory instead of your personal R library, and lets you snapshot the information packrat needs to recreate your setup on another machine.

stringr

Strings aren't big or fancy - but they are vital for many data cleaning and preparation tasks (and who doesn't love clean data?). Strings can be hard to wrangle in R - stringr seeks to change that. It provides a simple, modern interface to common string operations, making them just as easy in R as in Python or Ruby.

Material Design Best Practices

Hari Vignesh Jayapalan
04 Apr 2017
5 min read
If you're an Android or hybrid app developer, you'll probably be using Material UI components in your app. However, for every design pattern there are a few basic UX concepts. When those UX concepts are applied, the usability of the app and the user's experience bloom and flourish. This article showcases a handful of UX best practices for Material Design components.

What is "Material"?

A material metaphor is the unifying theory of a rationalized space and a system of motion. The material is grounded in tactile reality, inspired by the study of paper and ink, yet technologically advanced and open to imagination and magic. Learn more about Material at Material.io.

Derivation of best practices

All of the best practices showcased here are derived by assuming that the user is a beginner with a smartphone. The core ideology is to never make the user think, even for a moment.

Components to focus on

There are 25+ components in Material Design. We'll focus our best practices on the following components:

- RecyclerView
- Tab layouts

Best practices for RecyclerView

RecyclerView is a flexible view for providing a limited window into a large data set. RecyclerView supports representing homogeneous and heterogeneous data in two ways:

- Linear style (vertical and horizontal)
- Grid style (fixed and staggered)

The Peek-A-Boo problem

A peek-a-boo problem mostly occurs when listing things horizontally. When we represent data in the form of a list, a typical question is: how will the user know that the content can be scrolled (right to left)? In other words, how do you tell the user that there is more content down the line? Picture a layout where all the list items are arranged neatly within the viewport; for a moment, the user will wonder what to do next.

So how do we solve this problem? The solution is very simple. We need to display half, or a quarter, of the last item's content at the edge of the viewport. In short, the next item at the end of the viewport should be partially visible. This visual representation automatically triggers the brain to notice the partially visible content, and the user will instinctively drag the list to reveal it.

To stay responsive on all devices while tackling this problem, we need a small calculation: measure the viewport width and dynamically set the width of each ViewHolder. This is just one way of solving the problem; a sketch of it follows below.
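Here is a minimal sketch of that idea in Java, assuming a horizontal LinearLayoutManager; the CardViewHolder class, the R.layout.card_item resource, and the 85% factor are hypothetical placeholders you would tune for your own design.

// Fragment of a RecyclerView.Adapter (Android), shown in isolation.
@Override
public CardViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
    View view = LayoutInflater.from(parent.getContext())
            .inflate(R.layout.card_item, parent, false); // hypothetical item layout
    // Size each item to ~85% of the viewport so the next card "peeks" in,
    // hinting that the list can be scrolled horizontally.
    ViewGroup.LayoutParams params = view.getLayoutParams();
    params.width = (int) (parent.getMeasuredWidth() * 0.85f);
    view.setLayoutParams(params);
    return new CardViewHolder(view);
}

Because the width is derived from the parent's measured width rather than a hard-coded dp value, the peek effect holds up across screen sizes.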
The good news is that we have an alternate approach as well.

Alternate approach to the Peek-A-Boo problem

In our alternate approach, we switch our RecyclerView from linear style to grid style and let the user scroll in only one direction. If the user scrolls vertically, let them do that alone, and vice versa. This might sound brutal, but trust me, it benefits the user a lot. In a grid, the visibility of items is clear and the user will not have to think further. Apart from switching to the grid view, you can also tweak the last grid item to notify the user that there are still more items to see.

The data overload problem

This problem is usually seen in news feeds, image listings, and chat interfaces. When there is so much data to view and process, the user gets confused. Though each item in the list has a timestamp saying when the post or message was created or delivered, the problem is that the user has to notice the timestamp on each item while scrolling. To solve this problem, almost all top-notch apps use headers for identifying and sorting things out.

Headers are indeed a beautiful solution to the data overload problem. A header speaks to the user like: "Hey user! You're now entering my cave. Until the next guy speaks, everything below belongs to me" - and that's good. Take Google's Inbox app: Inbox uses headers effectively, but using the header alone has some problems too. Imagine the items under a particular header run long and the user leaves the app; when he comes back, he will not remember which section he was in. To solve this problem, we have sticky headers. These headers hold the context throughout the section, and the user has no trouble identifying the section.

TabLayout best practices

Tabs make it easy to explore and switch between different views. Tabs enable content organization at a high level, such as switching between views, data sets, or functional aspects of an app. Present tabs as a single row above their associated content. Tab labels should succinctly describe the content within. Because swipe gestures are used for navigating between tabs, don't pair tabs with content that also supports swiping.

The nested tab problem

Even in a few best-selling apps, I have spotted the nested tabs issue. Initially, on seeing nested tabs, a majority of users get confused by the navigation, although they get used to it after a while. Even an old version of one of the Google apps had this issue; later, they changed their way of classifying things. The best way to solve the nested tab problem is to find an alternative way to categorize things; you can also couple TabLayout with bottom navigation.

Hopefully you've gained a few best practices for designing better apps. If you've liked this article, please share it.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.


How to Retain Your Team

Xavier Bruhiere
11 Apr 2017
6 min read
The job market for software engineers is remarkable. Go ahead and Google something like "software engineer shortage," and you will find plenty of articles trying to make sense of what is going on in the tech world. This Quora thread seems to agree on the lack of talented software engineers. Given the number of articles about hiring (here, here, or here), it looks as if we, as an industry, haven't figured out how to match ambitious projects with the right people.

As a result, startup founders, HR, agencies, and the like hunt engineers. All day long, in all timezones. And the member of your team who sits right next to you has probably received an offer lately. You hired him, you mentored him, and he is now an important part of the implicit knowledge behind your products. So now, what can you do to keep him committed to your company's goals?

Why they leave

Exciting projects

Let's try to highlight the reasons that could make a developer leave. I just had lunch with a CTO who was trying to hire a senior backend engineer. He wants him to lead the scaling of a machine-learning platform, and the team of a project that just raised 6 million euros. You guessed it; the first thing that came to my mind was "whoa, the market is awesome!" There are so many startups, and even freelance jobs, that promote great projects. You can find a meaningful end to your work (boost education), something that appeals to your hobbies (upgrade how we listen to music), an incredibly challenging environment (monitor all the things), and so much more.

Career growth

But your teammates might also be more pragmatic and just want to make progress in their lives: a better salary abroad, a different lifestyle, management responsibilities, and so on. There is a reason that fits every ambition, and we should probably agree that this is fair game, but we will get to that later.

The problem is right here

This point is the most important one for you. While any serious engineer should continuously challenge himself, this is even more important when others rely on you. One can choose to leave because he sees no future in his current position, or because he no longer finds interest in his day-to-day tasks, or because he feels a lack of esteem. Or is it the culture of the company? These are all red flags that need to be addressed quickly, and that's what the next section of this blog post is about.

Why they stay

The culture is great

Let's state something right now so we can focus on the interesting stuff. Yes, salary matters. Yes, earning 250k a year in Silicon Valley makes a difference, but I fundamentally believe (from the engineers I have met, not guesswork) that it is just one criterion. Most of the job offers we receive allow a comfortable life, so at the end of the day, we crave a great lifestyle: a place where we feel confident, that fits the work/life balance we need, and where we think we fit in with committed coworkers. And it shows on the job boards. Startups try to attract high performers with human values like transparency, remote-friendly environments, nice offices, and anything it takes to make your magic thrive. Of course, it can be tricky. But in my humble opinion, you should try to let people get into their flow and expect the best from them.

They are challenged

Engineers like problems, so give them strong ones. They also love to learn, so always create challenges with different obstacles. And surround them with other brilliant minds that will bring new perspectives to their current understanding.
It will also help them stay humble and improve if the job turns out to be too difficult. It takes a lot of attention to avoid having teammates bored by tasks that are too easy, or feeling like failures because they are stuck. So pay attention to their struggles during your morning stand-up, and make sure their highly paid and highly trained brains get what they deserve.

There is a meaning

Finally, don't waste all this effort by failing to show them how their solutions fit into the end product. Promote the why: the dent in the universe they made. They will feel (and be) valuable. They will also give the best of their skills when tackling the next issue. When you know you are building something customers want or people need, you commit to great work and even accept temporary compromises. Too many times, I have seen features that didn't make it to production because the product owner realized far too late that they wouldn't generate any value for the end user. As a leader, make sure your team is working to grow the business, and make them stand out in front of your peers.

How to let it go

This is fair game

After all of those efforts, all the knowledge transmitted, all the sweat put into beautiful architectures, developers leave anyway. And sometimes, yes, it is just fair game. I mentioned a lot of excellent reasons for an engineer to move on to another adventure, and your last move is this: do not hold a grudge against them. A company is just a company, and you can't blame someone who puts a personal passion or a significant other first, or who managed to catch a wonderful opportunity.

Keep in touch

Instead, do yourself a favor and be supportive. The ecosystem of talented people and great projects is not that big, so encourage your best coworkers in their evolution. Your future self will thank you when your solid network of smart brains helps you in your own career. Open source trends show how much we share between techies, so start being someone other people will want to help back.

Learn

I apologize for ending with this obvious truth, but I have to: learn from everything that led you here. Take a step back and ask yourself if something went wrong. Nothing? Maybe you can improve anyway. The hiring process? The onboarding? Career evolution? One-to-one meetings? Time allocated to learning? Explore all those parameters, ask your peers, and experiment. Take care of your teammates' progress, be transparent and helpful, and they will keep supporting you.

About the Author

Xavier Bruhiere is a senior data engineer at Kpler. He is a curious and sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.