
Tech Guides

852 Articles

Tim Berners-Lee’s Solid - Trick or Treat?

Natasha Mathur
31 Oct 2018
2 min read
Solid is a set of conventions and tools developed by Tim Berners-Lee. It aims to build decentralized social applications based on Linked Data principles. It is modular and extensible, and it relies as much as possible on existing W3C standards and protocols. This open-source project was launched earlier this month for "personal empowerment through data".

Why are people excited about Solid?

Solid aims to radically transform the way Web applications work today, resulting in true data ownership as well as improved privacy. It hopes to empower individuals, developers, and businesses across the globe with completely new ways to build innovative and trusted applications. It gives users the freedom to choose where their data resides and who is allowed to access it.

Solid collects the data you want to share with advertisers or apps into a "Solid POD," a personal online data repository. You get to decide which app gets your data and which does not. The best thing is that you don't need to enter any data in apps that support Solid: you can simply allow or disallow access to the Solid POD, and the app will take care of the rest on its own. Moreover, Solid offers every user a choice about where their data gets stored and which specific people or groups can access selected elements of that data. Additionally, you can link to and share the data with anyone, be it family, friends or colleagues.

Is Solid a trick or a treat?

That said, a majority of the companies on the web are extremely sensitive when it comes to their data and might not be interested in losing control over it, so wide adoption seems to be a hurdle for now. Also, since Solid was only launched this month, there isn't much community support around it yet.

However, Solid is surely taking us a step closer to a freer and more open Internet, and seems to be a solid TREAT (pun intended) for all of us. For more information on Solid, check out the official Inrupt blog.


The Year of the Python

Sam Wood
04 Jan 2016
4 min read
When we asked developers for our $5 Skill Up report what the most valuable skill was in 2015, do you know what they said? Considering the title of this blog and the big snake image, you can probably guess: Python. Python was the most valuable skill they learned in 2015. But 2015 is over - so what did developers say they're hoping to learn from scratch, or improve their skills in, in 2016? Correct guess again! It's Python. Despite turning 26 this Christmas (it's the same age as Taylor Swift, you know), the language is thriving. Being the most widely adopted new language for two years running is impressive. So why are people flocking to it? Why are we living in the years of the Python? There are three main reasons.

1. It's being learned by non-developers

In the Skill Up survey, the people who were most likely to mention Python as a valuable skill they had learned also tended not to describe themselves as traditional software developers. The job role most likely to be learning Python was 'Academic', followed by analysts, engineers, and people in non-IT related roles. These aren't the people who live to code - but they are the people who are finding the ability to program an increasingly useful professional skill. Rather than working with software every day, they are using Python to perform specific and sophisticated tasks. Much like knowledge of Microsoft Office became the essential office skill of the Nineties/Noughties, it looks like Python is becoming the language of choice for those who know they need to be able to code but don't necessarily define themselves as working solely in dev or IT.

2. It's easy to pick up

I don't code. When I talked to my friends who do code, mumbling about maybe learning and looking for suggestions, they told me to learn Python. One of their principal reasons was that it is so bloody easy! This also ties in heavily to why we see Python being adopted by non-developers. Often learned as a first programming language, the speed and ease with which you can pick up Python is a boon - even with minimal prior exposure to programming concepts. With much less of an emphasis on syntax, there's less chance of tripping up on missing parentheses or semicolons than with more complex languages. Originally designed (and still widely used) as a scripting language, Python has become extremely effective for writing standalone programs. The shorter learning curve means that new users will find themselves creating functioning and meaningful programs in a much shorter period of time than with, say, C or Java.

3. It's a good all-rounder

Python can do a ton. From app development, to building games, to its dominance of data analysis, to its continued colonization of JavaScript's sovereign territory of web development through frameworks like Django and Flask, it's a great language for anyone who wants to learn something non-specialized. This isn't to say it's a Jack of All Trades, Master of None, however. Python is one of the key languages of scientific computing, aided by fast (and C-based) libraries like NumPy. Indeed, the strength of Python's versatility is the power of its many libraries, which allow it to specialize so effectively.

Welcoming Our New Python Overlords

Python is the double threat - used across the programming world by experienced and dedicated developers, and extensively and heartily recommended as the first language for people to pick up when they start working with software and coding. By combining ease of entry with effectiveness, it has come to stand as the most valuable tech skill to learn for the middle of the decade. How many years of the Python do you think lie ahead?
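The point about syntax is easy to see in practice. As a minimal sketch (the file name below is just a placeholder), here is the kind of small but genuinely useful program a newcomer can write after a first session with the language:

```python
# Count the five most common words in a text file - a typical "first real script".
from collections import Counter

with open("survey_responses.txt") as f:   # placeholder file name
    words = f.read().lower().split()

for word, count in Counter(words).most_common(5):
    print(word, count)
```

There are no type declarations, no semicolons and no boilerplate class to wrap it in, which is exactly why it works so well as a first language.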


Internet of Things or Internet of Thieves

Oli Huggins
07 Jan 2014
3 min read
While the Internet of Things (IoT) sounds like some hipster start-up from the valley, it is in actual fact sweeping the technology world as the next big thing and is the topic of conversation (and perhaps development) among the majority of the major league tech titans. Simply put, the IoT is the umbrella term for IP-enabled everyday devices with the ability to communicate over the Internet. Whether that is your fridge transmitting temperature readings to your smartphone, or your doorbell texting you once it has been rung, anything with power (and even some things without) can be hooked up to the World Wide Web and be accessed anywhere, anytime.

This will of course have a huge impact on consumer tech, with every device under the sun being designed to work with your smartphone or PC, but what's worrying is how all this is going to be kept secure. While there are a large number of industry-leading brands we can all trust (sometimes), there is an even bigger number of companies shipping devices out of China at extremely low production (and quality) costs. This prompts the question: if the company's mantra is low-cost products and mass sales, do they have the time, money (or care) to have an experienced security team and infrastructure to ensure these devices are secure? I'm sure you know the answer to that question.

Unconvinced? How about the TrendNet cams back in 2012? The basic gist was that a flaw in the latest firmware enabled you to add /anony/mjpg.cgi to the end of one of the cams' IP addresses and you would be left with a live stream of the IP camera. Scary stuff (and some funny stuff), but this was a huge mistake made by what seems to be a fairly legitimate company. Imagine this on a much larger scale, with many more devices, being developed by much more dubious companies. Want a more up-to-date incident? How about a hacker gaining access to a Foscam IP camera that a couple was using to watch over their child, and screaming "Wake up, baby! Wake up, baby!" I'll leave you to read more about that.

With the suggestion that by 2020 anywhere between 26 and 212 billion devices will be connected to the Internet, this opens up an unimaginable number of attack vectors, which will be abused by the black hats among us. Luckily, chip developers such as Broadcom have seen the payoff here by developing chips with a security infrastructure designed for wearable tech and the IoT. The new BCM20737 SoC provides "Bluetooth, RSA encryption and decryption capabilities, and Apple's iBeacon device detection technology", adding another layer of security that will be of interest to most tech developers. Whether the cost of such technology will appeal to all, though, is another thing altogether; low-cost tech developers will just not bother.

Now, I see that the threat of someone hacking your toaster and burning your toast is not something you would worry about, but imagine healthcare implants or house security being given the IoT treatment. Not sure I'd want someone taking control of my pacemaker or having a skeleton key to my house! Security is one of the major barriers to total adoption of the IoT, but it is also the only barrier that can be jumped over and forgotten about by less law-abiding companies. If I were to give anyone any advice before "connecting", it would be to spend your money wisely, don't go cheap, and avoid putting yourself in compromising situations around your IoT tech.


How to develop a game concept

Raka Mahesa
18 Sep 2017
5 min read
You may have an idea or a concept for a game, and you may want to make a full game based on that concept. Congratulations, you're now taking the first step in the game development process. But you may be unsure of what to do next with your game concept. Fortunately, that's what we're here to discuss.

How to find inspiration for a game idea

A game idea or concept can come from a variety of places. You may be inspired by another medium, such as a film or a book, you may have had an exciting experience and want to share it with others, you may be playing another game and think you can do better, or you may just have a sudden flash of inspiration out of nowhere. Because ideas can come from a variety of sources, they can take a number of different forms and levels of robustness. So it's important to take a step back and have another look at this idea of yours.

How to create a game prototype

What should you do after your game concept has been fleshed out? Well, the next step is to create a simple prototype based on your game concept to see if it is viable and actually fun to play.

Wait, what if this is your first foray into game development and you barely have any programming skill? Well, fortunately, developing a game prototype is a good entry to the world of programming. There are many game development tools out there, like GameMaker, Stencyl, and Construct 2, that can help you quickly create a prototype without having to write too many lines of code. These tools are so useful that even seasoned programmers use them to quickly build a prototype.

Should I use a game engine to prototype?

Should you use full-featured, professional game engines for making a prototype? Well, it's completely up to you, but one of the purposes of making a prototype is to be able to test out your ideas easily, so when an idea doesn't work out, you can tweak it quickly. With a full-featured game engine, even though it's powerful, it may take longer to complete simple tasks, and you can end up not being able to iterate on your game quickly enough.

That's also why most game prototypes are made with just simple shapes or very simple graphics. Creating those kinds of graphics doesn't take a lot of time and allows you to iterate on your game concept quickly. Imagine you're testing out a game concept and find that enemies that just randomly hop around aren't fun, so you decide to make those enemies simply run along the ground. If you're just using a red square for your hopping enemies, you can use the same square for running enemies. But if you're using, say, frog images for those enemies, you will have to switch to a different image when you want the enemies to run.

Why is prototyping so important in game development?

You may wonder why the emphasis is on creating a prototype instead of building the actual game. After all, isn't fleshing out a game concept supposed to make sure the game is fun to play? Well, unfortunately, what seems fun in theory may not actually be fun in practice. Maybe you thought that having a jump stamina would make things more exciting for the player, but after prototyping such a system, you may discover that it actually slows things down and makes the game less fun.

Also, prototyping is not just useful for measuring a game's fun; it's also useful for making sure the player has the kinds of experiences that the game concept wants to deliver. Maybe you have this idea of a game where the hero fights many enemies at once so the player can experience an epic battle. But after you prototype it, you may find that the game feels chaotic instead of epic. Fortunately, with a prototype you can quickly tweak the variables of your enemies to make the game feel more epic and less chaotic.

Using simple graphics

Using simple graphics is important for a game prototype. If players can have a good experience with a prototype that uses simple graphics, imagine the fun they'll have with the final graphics. Simple graphics are good because the experience the player feels comes from the game's functions, and not from how the game looks.

Next steps

After you're done building the prototype and have proven that your game concept is fun to play, you can move on to the next step in the game development process. Your next step depends on the sort of game you want to make. If it's a massive game with many systems, you might want to create a proper game design document that includes how you want to expand the mechanics of your game. But if the game is on the small side with simple mechanics, you can start building the final product and assets.

Good luck on your game development journey!

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
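To make the prototyping advice above concrete, here is a minimal sketch in Python with the pygame library (a stand-in for the tools named in the article, not something it prescribes): one red rectangle plays the enemy, and flipping a single flag switches it between the hopping and running behaviour without touching any art.

```python
# Prototype sketch: one red rectangle stands in for an enemy.
# Set HOPPING to False to test the "running" variant without changing any art.
import pygame

HOPPING = True
GROUND_Y = 260

pygame.init()
screen = pygame.display.set_mode((480, 320))
clock = pygame.time.Clock()

x, y, vy = 40, GROUND_Y, 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    x = (x + 3) % 480            # move right, wrap around the screen
    if HOPPING:
        vy += 1                  # crude gravity
        y += vy
        if y >= GROUND_Y:        # landed, so jump again
            y, vy = GROUND_Y, -12
    else:
        y = GROUND_Y             # running enemy stays on the ground

    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 40, 40), (x, y, 30, 30))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```

Because the enemy is just a rectangle, trying the second behaviour is a one-line change, which is exactly the kind of iteration speed a prototype is there to give you.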


Google Daydream

Raka Mahesa
15 Mar 2017
5 min read
Google Cardboard, with more than 5 million users, is a success for Google. So it was not a big surprise when Google announced their next step into the world of virtual reality with the evolution of Google Cardboard: Google Daydream, a more robust and enhanced mobile VR platform.

So, what is Google Daydream? Is it just a better version of Google Cardboard? How does it differ from Google Cardboard? Well, they are both platforms for mobile VR apps that can be viewed in a mobile headset. Unlike Cardboard though, Google Daydream has a set of specifications that mobile devices and headsets must follow. This means developers know exactly what kind of input the users of their apps will have, something that wasn't possible on the Cardboard platform.

The biggest and most notable feature of Google Daydream compared with Cardboard, however, is the addition of a motion-based controller. Users are now able to use this remote-like controller to point and interact with the virtual world much more intuitively. With this controller, developers can build a better and more immersive VR experience.

There are 4 physical inputs available to the user on the Daydream Controller:
- Touchpad (the big circular pad)
- App button (the button with the line symbol)
- Home button (the button with the circle symbol)
- Volume buttons (the buttons on the side)

And since it's a motion-based controller, it comes with various sensors to detect the user's hand movement. Do note that the movement that can be detected by the controller is mostly limited to rotational movement, unlike the fully positionally tracked controllers on a PC VR platform.

Two more things to keep in mind: the first is that the home and volume buttons are not accessible to developers and are reserved for the platform's functionality. The second is that the touchpad is only capable of detecting a single touch. And since the documentation doesn't mention multitouch being added in the future, it's safe to assume that the controller is designed for single touch and will stay that way for the foreseeable future.

All right, now that we know what the controller can do, let's dive deeper into the Google Daydream SDK and figure out how to use the Daydream Controller in our apps. Before we go further though, let's make sure we have all the requirements for developing Daydream apps:
- Unity 5.6 (with native Daydream support)
- Google VR SDK for Unity v1.2
- Daydream Controller, or an Android phone with a gyroscope

Yes, you don't have to own the controller to develop a controller-compatible app, so don't fret. Instead, we're going to emulate the Daydream controller using an Android phone. To do that, all we need to do is install the controller emulator APK on our phone and run the emulator app. Then, to enable the emulator to be detected in the Unity Editor, we simply connect the phone to the computer with a USB cable.

Do note that we can't connect the actual Daydream Controller to our computer and will only be able to use the controller when it's paired to a mobile phone. So you may want to use the emulator for testing purposes even if you have the controller.

To start reading user input from the controller, we first must add the GvrControllerMain prefab to our scene. Afterwards, we can simply use the GvrController API to detect any user interaction with the device. The GvrController API behaves similarly to Unity's Input API, so you're in luck if you're already familiar with Unity.

Like the Unity Input API, there are three properties to use if we want to find out the state of the buttons on the controller. Use the GvrController.ClickButtonDown property to check if the touchpad was just clicked, the GvrController.ClickButtonUp property to check if the touchpad was just released, and the GvrController.ClickButton property to see if the user is holding down the touchpad click. Simply replace the "ClickButton" part with "AppButton" to detect the state of the app button on the controller.

The API for the controller's touchpad is similar to the Unity Mouse Input API as well. First, we need to find out if the touchpad is being used by checking the GvrController.IsTouching property. Then, we can read the touch position with the GvrController.TouchPos property. There is no function for detecting swipes and other movements, but you should be able to create your own detector by reading the changes in touch position.

For traditional controllers, these properties would be enough to get all the user input. However, the Daydream Controller is a controller for VR, so there's still another aspect we should read: movement. Using the GvrController.Orientation property, we can get a rotational value of the controller's orientation in the real world. We can then apply that value to a GameObject in our scene and have it mirror the movement of the controller.

And that's it for our introduction to the Daydream Controller. The world of virtual reality is still vast and unexplored, and every day, new ways to interact with the VR world are being tried out. So, keep experimenting!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


"My Favorite Tools to Build a Blockchain App" - Ed, The Engineer

Aaron Lazar
23 Oct 2017
7 min read
Hey! It's great seeing you here. I am Ed, the Engineer, and today I'm going to open up my secret toolbox and share some great tools I use to build Blockchains. If you're a Blockchain developer or a developer-to-be, you've come to the right place! If you are not one, maybe you should consider becoming one.

"There are only 5,000 developers dedicated to writing software for cryptocurrencies, Bitcoin, and blockchain in general. And perhaps another 20,000 had dabbled with the technology, or have written front end applications that connect with the blockchain." - William Mougayar, The Business Blockchain

Decentralized apps, or dapps as they are fondly called, are serverless applications that can be run on the client side, within a blockchain-based distributed network. We're going to learn what the best tools are to build dapps, and over the next few minutes, we'll take these tools apart one by one. For a better understanding of where they fit into our development cycle, we'll group them up into stages - just like the buildings we build. So, shall we begin? Yes, we can!! ;)

The Foundation: Platforms

The first and foremost element for any structure to stand tall and strong is its foundation. The same goes for Blockchain apps. Here, in place of all the mortar and other things, we've got decentralized and public blockchains. There are several existing networks, the likes of Bitcoin, Ethereum or Hyperledger, that can be used to build dapps. Ethereum and Bitcoin are both decentralized, public chains that are open source, while Hyperledger is private and also open source. Bitcoin may not be a good choice to build dapps on, as it was originally designed for peer-to-peer transactions and not for building smart contracts.

The Pillars of Concrete: Languages

Now, once you've got your foundation in place, you need to start raising pillars that will act as the skeleton for your applications. How do we do this? Well, we've got two great languages specifically for building dapps.

Solidity

It's an object-oriented language that you can use for writing smart contracts. The best part of Solidity is that you can use it across all platforms, making it the number one choice for many developers. It's a lot like JavaScript and way more robust than other languages. Along with Solidity, you might want to use Solc, the compiler for Solidity. At the moment, Solidity is the language that's getting the most support and has the best documentation.

Serpent

Before the dawn of Solidity, Serpent was the reigning language for building dapps - something like how bricks replaced stone to build massive structures. Serpent, though, is still being used in many places to build dapps, and it has great real-time garbage collection.

The Transit Mixers: Frameworks

After you choose your language to build dapps, you need a framework to simplify the mixing of concrete to build your pillars. I find these frameworks interesting:

Embark

This is a framework for Ethereum you can use to quicken development and to streamline the process by using tools and functionalities. It allows you to develop and deploy dapps easily, or even build a serverless HTML5 application that uses decentralized technology. It equips you with tools to create new smart contracts which can be made available in JavaScript code.

Truffle

Here is another great framework for Ethereum, which boasts of taking on the task of managing your contract artifacts for you. It includes support for the library that links complex Ethereum apps and provides custom deployments.

The Contractors: Integrated Development Environments

Maybe you are not the kind that likes to build things from scratch. You just need a one-stop place where you can say what kind of building you want and everything else just falls into place. Hire a contractor. If you're looking for the complete package to build dapps, there are two great tools you can use: Ethereum Studio and Remix (Browser-Solidity). The IDE takes care of everything - right from emulating the live network to testing and deploying your dapps.

Ethereum Studio

This is an adapted version of Cloud9, built for Ethereum with some additional tools. It has a blockchain emulator called the sandbox, which is great for writing automated tests. Fair warning: you must pay for this tool as it's not open source, and you must use Azure Cloud to access it.

Remix

This can pretty much do the same things that Ethereum Studio can. You can run Remix from your local computer and allow it to communicate with an Ethereum node client that's on your local machine. This will let you execute smart contracts while connected to your local blockchain. Remix was still under development at the time of writing this article.

The Rebound Hammer: Testing tools

Nothing goes live until it's tried and tested. Just like the rebound hammer you might use to check the quality of concrete, we have a great tool that helps you test dapps.

Blockchain Testnet

For testing purposes, use the testnet, an alternative blockchain. Whether you want to create a new dapp using Ethereum or any other chain, I recommend that you use the related testnet, which ideally works as a substitute for the true blockchain that you will be using for the real dapp. Testnet coins are different from actual bitcoins, and do not hold any value, allowing you as a developer or tester to experiment without needing to use real bitcoins or having to worry about breaking the primary bitcoin chain.

The Wallpaper: dapp Browsers

Once you've developed your dapp, it needs to look pretty for consumers to use. Dapp browsers are mostly the user interfaces for the decentralized Web. Two popular tools that help you bring dapps to your browser are Mist and Metamask.

Mist

It is a popular browser for decentralized web apps. Just as Firefox or Chrome are for the Web 2.0, the Mist browser will be for the decentralized Web 3.0. Ethereum developers are able to use Mist not only to store Ether or send transactions, but also to deploy smart contracts.

Metamask

With Metamask, you can comfortably run dapps in your browser without having to run a full Ethereum node. It includes a secure identity vault that provides a UI to manage your identities on various sites, as well as sign blockchain contracts.

There! Now you can build a Blockchain!

Now you have all the tools you need to make amazing and reliable dapps. I know you're always hungry for more - this GitHub repo created by Christopher Allen has a great listing of tools and resources you can use to begin or improve your Blockchain development skills. If you're one of those lazy-but-smart folks who want to get things done at the click of a mouse button, then BaaS, or Blockchain as a Service, is something you might be interested in. There are several big players in this market at the moment, the likes of IBM, Azure, SAP and AWS. BaaS is basically for organizations and enterprises that need blockchain networks that are open, trusted and ready for business.

If you go the BaaS way, let me warn you - you're probably going to miss out on all the fun of building your very own blockchain from scratch. With so many banks and financial entities beginning to set up their blockchains for recording transactions and transfers of assets, and investors betting billions on distributed ledger-related startups, there are hardly a handful of developers out there who have the required skills. This leaves you with a strong enough reason to develop great blockchains and sharpen your skills in the area. Our Building Blockchain Projects book should help you put some of these tools to use in building reliable and robust dapps. So what are you waiting for? Go grab it now and have fun building blockchains!
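To give the testnet advice a concrete shape, here is a minimal sketch in Python using the web3.py library (not one of the tools listed above; the node address is the default for a local test chain such as Ganache, and the snippet assumes the web3.py v6 API): it connects to a local test chain and prints a few account balances, with no real funds anywhere in sight.

```python
# Sketch: query a local Ethereum test node (e.g. Ganache) instead of the real chain.
# Assumes web3.py v6 and a node listening on the default http://127.0.0.1:8545.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
print("connected:", w3.is_connected())

for account in w3.eth.accounts[:3]:
    balance_wei = w3.eth.get_balance(account)
    print(account, Web3.from_wei(balance_wei, "ether"), "test ETH")
```

Everything this script touches is throwaway test currency, which is exactly the point of developing against a testnet first.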

Using Meta-Learning in Nonstationary and Competitive Environments with Pieter Abbeel et al

Sugandha Lahoti
15 Feb 2018
5 min read
This ICLR 2018 accepted paper, Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments, addresses the use of meta-learning to operate in nonstationary environments, represented as a Markov chain of distinct tasks. The paper is authored by Pieter Abbeel, Maruan Al-Shedivat, Trapit Bansal, Yura Burda, Ilya Sutskever, and Igor Mordatch.

Pieter Abbeel has been a professor at UC Berkeley since 2008. He was also a Research Scientist at OpenAI (2016-2017). His current research focuses on robotics and machine learning, with a particular focus on meta-learning and deep reinforcement learning. Another of the paper's authors, Ilya Sutskever, is the co-founder and Research Director of OpenAI. He was also a Research Scientist on the Google Brain team for 3 years.

Meta-learning, or learning to learn, typically uses metadata to understand how automatic learning can become flexible in solving learning problems, i.e., to learn the learning algorithm itself. Continuous adaptation in real-world environments is essential for any learning agent, and the meta-learning approach is an appropriate choice for this task. This article will talk about one of the top accepted research papers in the field of meta-learning at the 6th annual ICLR conference, scheduled to happen between April 30 and May 3, 2018.

Using a gradient-based meta-learning algorithm for nonstationary environments

What problem is the paper attempting to solve?

Reinforcement learning algorithms, although achieving impressive results ranging from playing games to applications in dialogue systems and robotics, are limited to solving tasks in stationary environments. The real world, on the other hand, is often nonstationary, either due to complexity, changes in the dynamics of the environment over the lifetime of a system, or the presence of multiple learning actors. Nonstationarity breaks the standard assumptions and requires agents to continuously adapt, both at training and execution time, in order to succeed.

The classical approaches to dealing with nonstationarity are usually based on context detection and tracking, i.e., reacting to changes that have already happened in the environment by continuously fine-tuning the policy. However, nonstationarity allows only limited interaction before the properties of the environment change. This immediately puts learning into the few-shot regime and often renders simple fine-tuning methods impractical. In order to continuously learn and adapt from limited experience in nonstationary environments, the authors of this paper propose the learning-to-learn (or meta-learning) approach.

Paper summary

This paper proposes a gradient-based meta-learning algorithm suitable for continuous adaptation of RL agents in nonstationary environments. The agents meta-learn to anticipate the changes in the environment and update their policies accordingly. The method builds upon previous work on gradient-based model-agnostic meta-learning (MAML), which has been shown to be successful in few-shot settings. The authors re-derive MAML for multi-task reinforcement learning from a probabilistic perspective, and then extend it to dynamically changing tasks. The paper also considers the problem of continuous adaptation to a learning opponent in a competitive multi-agent setting, and the authors have designed RoboSumo, a 3D environment with simulated physics that allows pairs of agents to compete against each other.

The paper answers the following questions:
- What is the behavior of different adaptation methods (in nonstationary locomotion and competitive multi-agent environments) when the interaction with the environment is strictly limited to one or very few episodes before it changes?
- What is the sample complexity of different methods, i.e., how many episodes are required for a method to successfully adapt to the changes?

Additionally, it answers the following questions specific to the competitive multi-agent setting:
- Given a diverse population of agents that have been trained under the same curriculum, how do different adaptation methods rank in a competition versus each other?
- When the population of agents is evolved for several generations, what happens to the proportions of different agents in the population?

Key Takeaways

This work proposes a simple gradient-based meta-learning approach suitable for continuous adaptation in nonstationary environments. The method was applied to nonstationary locomotion and to a competitive multi-agent setting, the RoboSumo environment. The key idea of the method is to regard nonstationarity as a sequence of stationary tasks and train agents to exploit the dependencies between consecutive tasks such that they can handle similar nonstationarities at execution time. In both settings, the meta-learned adaptation rules were more efficient than the baselines in the few-shot regime. Additionally, agents that meta-learned to adapt demonstrated the highest level of skill when competing in iterated games against each other.

Reviewer feedback summary

Overall score: 24/30. Average score: 8.

The paper was termed a great contribution to ICLR. According to the reviewers, the paper addressed a very important problem for general AI and was well written. They also appreciated the careful experiment designs and thorough comparisons, which made the results convincing. They found that editorial rigor and image quality could be better; however, no content-related improvements were suggested. The paper was appreciated for being dense and rich on rapid meta-learning.
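The gradient-based idea the paper builds on can be made concrete with a toy sketch. The Python code below is not the authors' implementation: it is a first-order, NumPy-only illustration of the MAML-style inner/outer loop on a trivial family of 1-D regression tasks (sampled i.i.d. here, whereas the paper deals with a Markov chain of tasks), just to show the "adapt with a few gradient steps, then update the initialization" structure.

```python
# First-order MAML-style sketch on toy 1-D linear regression tasks y = a * x.
# Illustrative only; not the paper's algorithm, environments, or code.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                  # meta-learned initialization of the single weight
inner_lr, outer_lr = 0.1, 0.01

def make_batch(a, n=20):
    x = rng.normal(size=n)
    return x, a * x          # targets for a task with slope a

def grad(w, x, y):
    # gradient of the mean squared error 0.5 * (w*x - y)^2 with respect to w
    return np.mean((w * x - y) * x)

for step in range(2000):
    a = rng.uniform(-2.0, 2.0)                  # sample a task
    x_train, y_train = make_batch(a)
    x_val, y_val = make_batch(a)

    adapted = theta - inner_lr * grad(theta, x_train, y_train)  # inner-loop adaptation
    theta -= outer_lr * grad(adapted, x_val, y_val)             # first-order outer update

print("meta-learned initialization:", theta)
```

The interesting quantity is theta: it is not the best weight for any single task, but it is a starting point from which a single gradient step adapts well to each new task, which is the property MAML-style methods optimize for.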


Top 5 DevOps Tools to Increase Agility

Darrell Pratt
14 Oct 2016
7 min read
DevOps has been broadly defined as a movement that aims to remove the barriers between the development and operations teams within an organization. Agile practices have helped to increase speed and agility within development teams, but the old methodology of throwing the code over the wall to an operations department to manage the deployment of the code to the production systems still persists. The primary goal of the adoption of DevOps practices is to improve both the communication between disparate operations and development groups, and the process by which they work. Several tools are being used across the industry to put this idea into practice. We will cover what I feel is the top set of those tools from the various areas of the DevOps pipeline, in no particular order.

Docker

"It worked on my machine…" Every operations or development manager has heard this at some point in their career. A developer commits their code and promptly breaks an important environment because their local machine isn't configured to be identical to a larger production or integration environment. Containerization has exploded onto the scene, and Docker is at the nexus of the change to isolate code and systems into easily transferable modules.

Docker is used in the DevOps suite of tools in a couple of ways. The quickest win is to first use Docker to provide developers with easily usable containers that can mimic the various systems within the stack. If a developer is working on a new RESTful service, they can check out the container that is set up to run Node.js or Spring Boot, and write the code for the new service with the confidence that the container will be identical to the service environment on the servers. With the success of using Docker in the development workflow, the next logical step is to use Docker in the build stage of the CI/CD pipeline. Docker can help to isolate the build environment's requirements across different portions of the larger application. By containerizing this step, it is easy to use one generic pipeline to build components spanning from Ruby and Node.js to Java and Golang.

Git & JFrog Artifactory

Source control and artifact management act as a funnel for the DevOps pipeline. The structure of an organization can dictate how they run these tools, be it hosted or served locally. Git's decentralized source code management and high-performance merging features have helped it become the most popular tool in version control systems. Atlassian Bitbucket and GitHub both provide a good set of tooling around Git and are easy to use and to integrate with other systems. Source code control is vital to the pipeline, but the control and distribution of artifacts into the build and deployment chain is important as well.

(Image: Branching in Git)

Artifactory is a one-stop shop for any binary artifact hosted within a single repository, which now supports Maven, Docker, Bower, Ruby Gems, CocoaPods, RPM, Yum, and npm. As the codebase of an application grows and includes a broader set of technologies, the ability to control this complexity from a single point and integrate with a broad set of continuous integration tools cannot be stressed enough. Ensuring that the build scripts are using the correct dependencies, both external and internal, and serving a local set of Docker containers reduces the friction in the build chain and will make the lives of the technology team much easier.

Jenkins

There are several CI servers to choose from in the market. The hosted set of tools such as Travis CI, Codeship, Wercker and Circle CI are all very well suited to driving an integration pipeline, and each caters slightly better to an organization that is more cloud focused (source control and hosting), with deep integrations with GitHub and cloud providers like AWS, Heroku, Google and Azure. The older and less flashy system is Jenkins. Jenkins has continued to nurture a large community that is constantly adding new integrations and capabilities to the product. The Jenkins Pipeline feature provides a text-based DSL for creating complex workflows that can move code from repository to the glass with any number of testing stages and intermediate environment deployments. The pipeline DSL can be created from code, and this enables a good scaffolding setup for new projects to be integrated into the larger stack's workflow.

(Image: Pipeline example)

Hashicorp Terraform

At this point we have a system that can build and manage applications through the software development lifecycle. The code is hosted in Git, orchestrated through testing and compilation with Jenkins, and running in reliable containers, and we are storing and proxying dependencies in Artifactory. The deployment of the application is where the operations and development groups come together in DevOps. Terraform is an excellent tool to manage the infrastructure required for running the applications as code itself. There are several vendors in this space - Chef, Puppet and Ansible to name just a few. Terraform sits at a higher level than many of these tools by acting as more of a provisioning system than a configuration management system. It has plugins to incorporate many of the configuration tools, so any investment that an organization has made in one of those systems can be maintained.

(Image: Load balancer and instance config)

Where Terraform excels is in its ability to easily provision arbitrarily complex multi-tiered systems, both locally and cloud hosted. The syntax is simple and declarative, and because it is text, it can be versioned alongside the code and other assets of an application. This delivers on "Infrastructure as Code."

Slack

A chat application was probably not what you were expecting in a DevOps article, but Slack has been a transformative application for many technology organizations. Slack provides an excellent platform for fostering communication between teams (text, voice and video) and integrating various systems. The DevOps movement stresses the removal of barriers between the teams and individuals who work together to build and deploy applications. Webhooks provide simple integration points for things such as build notifications, environment statuses and deployment audits. There is a growing number of custom integrations for some of the tools we have covered in this article, and the bot space is rapidly expanding into AI-backed members of the team that answer questions about builds and code, deploy code, or troubleshoot issues in production. It's not a surprise that this space has gained its own name, ChatOps. Articles covering the top 10 ChatOps strategies will surely follow.

Summary

In this article, we covered several of the tools that integrate into the DevOps culture, and how those tools are used and are transforming all areas of the technology organization. While not an exhaustive list, the areas that were covered will give you an idea of the scope of the space and how these various systems can be integrated together.

About Darrell Pratt

Darrell Pratt is a technologist who is responsible for a range of technologies at Cars.com, where he is the director of software development and delivery. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. Find him on Twitter here: @darrellpratt.


How to beat Cyber Interference in an Election process

Guest Contributor
05 Sep 2018
6 min read
The battle for political influence and power is transcending all boundaries and borders. There are many interests at stake, and some parties, organizations, and groups are willing to pull out the "big guns" in order to get what they want. "Hacktivists" are gaining steam and prominence these days. However, governmental surveillance and even criminal (or, at the very least, morally questionable) activity can happen too, and when it does, the scandal rises to the most prominent headlines in the world's most influential papers. That was the case in the United States' presidential election of 2016 and in France's most recent process.

Speaking of the former, Congress and the Department of Investigations revealed horrifying details about Russian espionage activity in the heat of the battle between Democrat Hillary Clinton and Republican Donald Trump, who ended up taking the honors. As for the latter, the French had better luck in their quest to prevent the Russians from wreaking havoc in the digital world. In fact, it wasn't luck: it was due diligence, a sense of responsibility, and a clever way of using past experiences (such as what happened to the Americans) to learn and adjust.

Russia's objective was to influence the outcome of the process by publishing top-secret and compromising conversations between high-ranking officials. In their attempt to interfere in the American elections, they managed to get into networks and systems controlled by the state to publish fake news, buy Facebook ads, and employ bots to spread the fake news pieces.

How to stop cyber interference during elections

Everything should start with awareness about how to avoid hacking attacks, as well as smoother communication and integration between security layers. Since the foundation of it all is the law, each country needs to continually make upgrades to have all systems ready to avoid and fight cyber interference in elections and in all facets of life. Diplomatic relationships need to establish just how far a nation state can go in defending its sovereignty against such crimes.

Pundits and experts in the matter state that until the system is hacking-proof and can offer reliability, every state needs to gather and count hand votes as a backup to digital votes. Regarding this, some advocates recently told Congress that the United States should implement paper ballots that provide physical evidence of every vote, effectively replacing the unreliable and vulnerable machines currently used. According to J. Alex Halderman, a computer science professor, this ballot might look "low tech" to the average eye, but it represents a "reliable and cost-effective defense."

Paying due attention to every detail

Government authorities need to pay better attention to propaganda (especially Russian propaganda), because it may show patterns about the nation's intentions. By now, we all know what the Russians are capable of, and figuring out their intentions would go a long way toward helping the country prepare for future attacks in a better way. The American government may also require Russian media and social platforms to register under FARA, the Foreign Agents Registration Act. That way, there will be a more efficient database about who is a foreign agent of influence. One of the most critical corrective measures to be taken in the future is prohibiting the purchase of advertising that directly influences the outcome of certain processes and elections.

Handing out diplomatic sanctions just isn't enough

Lately, the US Congress, with President Trump's approval, has been handing out sanctions to people involved in the 2016 cyber attack. However, a far more effective measure would be enhancing cyber defense, because it can offer immediate detection of threats and is well equipped to bring any network intrusions to an end. According to scientist Thomas Schelling, the fear of the consequences of any given situation can be a powerful motivator, but it can be difficult to deter individuals or organizations that can't be easily tracked and identified, and that act behind irrational national ideologies and political goals. Instead, adopting cyber defense can stop any intrusion in time and offer more efficient punishments. Active defense is legally viable and a very capable solution because it can disrupt the perpetrators' outside networks. Enabling the "hack back" approach can allow countries to take justice into their own hands in case of any cyber attack attempt. The next step would be working on lowering the threshold required to enable this kind of response.

Cyber defense is the way to go

Cyber defense measures can be very versatile and have proven effectiveness. Take the example of France: in the most recent elections, French intelligence watched Russian cyber activity for the duration of Emmanuel Macron's election campaign. Some strategies include letting the hackers steal fake files and documents, misleading them and making them waste their time. Cyber defense can also embed beacons that disclose the attackers' current location or mess with their networks. There is even the possibility of erasing stolen information. In the case of France, cyber defense specialists were one step ahead of the Russians: they made false email accounts and introduced numerous fake documents and files that discouraged the Russians.

Known systems, networks, and platforms

The automated capabilities of cyber defense can trump any malicious attempt or digital threat. For example, the LightCyber Magna platform can analyze large amounts of information. Such a system may have been able to stop Russian hackers from installing malware at the DNC (Democratic National Committee). Another cyber defense tool, Palo Alto Networks Traps, is known to block malware as strong as the WannaCry ransomware attack that encrypted more than 200,000 computers in almost a hundred countries. Numerous people lost their data or had to pay thousands of dollars to recover it.

VPN: an efficient cybersecurity tool

Another perfectly usable cyber defense tool is the Virtual Private Network. VPNs such as Surfshark can encrypt all traffic shared online and mask the user's IP address. They effectively provide anonymous browsing as well as privacy. Cyber defense isn't just a luxury that only a handful of countries can afford: it is a necessity, a tool that helps combat cyber interference not only in elections but in every facet of life and international relationships.

Author Bio

Harold is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.


We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview]

Richard Gall
24 Jul 2018
4 min read
Two years ago, Nadia Eghbal put together a report with the Ford Foundation. Titled Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure, the report is one of the most important discussions on the role of open source software in business and society today. It needs to be read. In it, Eghbal writes:

"Everybody relies on shared code to write software, including Fortune 500 companies, government, major software companies and startups. In a world driven by technology, we are putting increased demand on those who maintain our digital infrastructure. Yet because these communities are not highly visible, the rest of the world has been slow to notice."

Nadia's argument is important for both engineers and the organizations that depend on them. It throws light on the literal labor that goes into building and maintaining software. At a time when issues of trust and burnout cast a shadow over the tech industry, Nadia's report couldn't be more important. It's time for the world to stop pretending software is magic - it requires hard work.

Today, Nadia works for Protocol Labs. There, she continues her personal mission to explore and improve the relationship between those who build software and those who need it. I was lucky enough to speak to Nadia via email, where she told me her thoughts on the current state of open source in 2018.

Open source software in 2018

Do you think there's a knowledge gap or some confusion around open source? If so, what might be causing it?

Open source has been around for ~20 years now (and free software is much older than that), but I don't think we've fully acknowledged how much things have changed. Earlier concerns, like those around licensing, are less salient today because of all the great work that was done in the late 1990s and early 2000s. But there isn't really a coherent conversation happening around the needs or cultural shifts in modern open source today, like managing communities or finding the time and resources to work on projects. I think that's partly because "open source" is such an obvious term now that people affiliate with specific communities, like JavaScript or Ruby - so the meta-conversation around open source is happening less frequently.

Your report was published in July 2016. Has anything changed since it was published?

(Photo: Nadia Eghbal at Strange Loop 2017, via commons.wikimedia.org)

Lots! When the report was first published, it wasn't commonly accepted that sustainability was an important topic in open source. Today, it's much more frequently discussed, with people starting research initiatives, conversations, and even companies around it. My views have evolved on the topic, too. Money is complicated in open source, especially given its decentralized nature, and it's closely tied to behavior and incentives. Understanding all of that as a complete picture takes time.

Getting developers to actively contribute to open source projects

Following the arguments put forward in your report, do you think there are any implications for working software engineers - either professionally or politically?

I'd like to see more developers advocate for company policies that encourage employees to contribute back to the open source they use. Open source projects have become sort of productized as they've scaled, but it would be great to see more developers go from being passive users to active contributors. It's also great for working developers who want to show off their work in public.

Similarly, are there any implications for businesses?

Any software-enabled business is mostly running on public infrastructure, not proprietary code, anymore. It's in their best interest to get to know the people behind the code.

Follow Nadia on Twitter: @nayafia
Visit Nadia's website: nadiaeghbal.com

What you need to know about IoT product development

Raka Mahesa
10 Oct 2017
5 min read
Software is eating the world. It's a famous statement made by Marc Andreessen back in 2011 about the rise of software companies and how software will disrupt many, many industries. Today, as we live among devices that run on smart software, that statement couldn't be more true. We live surrounded by dozens of devices that are connected to each other, as the Internet of Things slowly spreads throughout our world. Each year, a batch of new smart devices are introduced to the market, hoping to find a place in our connected lives.  Have you ever wondered though about how these smart devices are made? Are they a software project? Or are they actually a hardware project? What consideration do we need to think about when we're developing these products? With those questions in mind, let's take a further look into the product development of the Internet of Things.   Before we go on though, let's clarify the kind of product that we will be discussing. For this article, what counts as a product is a software or hardware project that was not made for personal use. The scale and complexity of the product doesn't really matter. It could be a simple connected camera network, it could be a brand new type of device that the world has never seen before, or it could be simply adding an analytical tool to a currently working device.  Working with hardware is expensive  Now that we have that cleared up, let's start with the first and most important thing you need to know about IoT product development: working with hardware is not only different from developing software, it's also more difficult and more expensive. In fact, the reason that so many startup companies are popping up these days, is because starting a software business is much cheaper than starting a hardware business. Before software was prevalent, it was much harder and costly to start a technology business.  Unlike software, hardware isn't easy to change. Once you're set to manufacture a particular hardware, there's no changing the end result, even if there's a mistake with your initial design. And even if your design is flawless, there could still be a problem with the material you're working with or even with the manufacturer themselves. So, when working with hardware, you need to be extra careful, because a single mistake could end up being exceptionally costly.  Fortunately, these days there are solutions that could alleviate those issues, like 3D printing. With 3D printing, we can cheaply produce our hardware design. That way, we can quickly evaluate the look and detect any issue with the hardware design without needing to go back and forth with the manufacturer. Do keep in mind that even with 3D printing, we still need to test our hardware with the actual, final material and manufacturing method.  Requirements and functionality are important  Another thing that you need to know about IoT product development is that you need to figure out the full requirement and functionality of your product very early on. Yes, when you're developing software, you also need to find out about the software requirement in the beginning, but it's a bit different with IoT, because it affects everything in the project.  You see, with software development, your toolkit is meant to be general and capable of dealing with most problems. For example, if you want to build a web application, then most of the time, the framework and language that you choose will be able to build the application that you want. 
The development environment for IoT doesn't work that way, however; it is much more specific. A given IoT toolkit is meant to solve problems under particular conditions. Coupled with the fact that IoT products have additional factors to consider, such as power consumption, choosing the right platform for the right project is a must. For example, if you find out later in the project that you need more processing power than your hardware platform provides, you will have to retool a lot of things.

Consider UI

User interaction is another big thing you need to consider in IoT product development. A lot of devices don't have any screen or any complicated input method, so you need to figure out early how users will interact with your product. Should users be able to do any interaction right on the device? Should they interact with the device using their phones? Should they be able to access the device remotely? These are all questions you need to answer before you can determine the components your product requires.

Consider connectivity

Speaking of remote access, connectivity is another factor you need to consider in IoT product development. While there are many ways for your product to connect to the Internet, you should also ask whether it makes sense for your product to have an Internet connection at all. Maybe your product will be placed in a spot that wireless connections don't reach. Maybe, instead of using the Internet, your product should transfer its data and logs whenever a storage device is connected to it.

There are a lot of things to consider when you are developing products for the Internet of Things. The topics we discussed should give you a good place to start.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

7 of the best machine learning conferences for the rest of 2018

Richard Gall
12 Jun 2018
8 min read
We're just about half way through the year - scary, huh? But there's still time to attend a huge range of incredible machine learning conferences in 2018. Given that in this year's Skill Up survey developers working in every field told us that they're interested in learning machine learning, it will certainly be worth your while (and money). We fully expect this year's machine learning conference circuit to capture the attention of those beyond the analytics world.

The best machine learning conferences in 2018

But which machine learning conferences should you attend for the rest of the year? There's a lot out there, and they're not always that cheap. Let's take a look at seven of the best machine learning conferences for the rest of this year.

AI Summit London

When and where? June 12-14 2018, Kensington Palace and ExCel Center, London, UK.

What is it? AI Summit is all about AI and business - it's as much for business leaders and entrepreneurs as it is for academics and data scientists. The summit covers a lot of ground, from pharmaceuticals to finance to marketing, but the main idea is to explore the incredible ways artificial intelligence is being applied to a huge range of problems.

Who is speaking? According to the event's website, there are more than 400 speakers at the summit. The keynote speakers include a number of impressive CEOs, including Patrick Hunger, CEO of Saxo Bank, and Helen Vaid, Global Chief Customer Officer of Pizza Hut.

Who's it for? This machine learning conference is primarily for anyone who would like to consider themselves a thought leader. Don't let that put you off, though; with a huge number of speakers from across the business world, it is a great opportunity to see what the future of AI might look like.

ML Conference, Munich

When and where? June 18-20, 2018, Sheraton Munich Arabella Park Hotel, Munich, Germany.

What is it? Munich's ML Conference is also about the applications of machine learning in the business world. But it's a little more practical-minded than AI Summit - it's more about how to actually start using machine learning from a technological standpoint.

Who is speaking? Speakers at ML Conference are researchers and machine learning practitioners. Alison Lowndes from NVIDIA will be speaking, likely offering some useful insight into how NVIDIA is helping make deep learning accessible to businesses; Christian Petters, solutions architect at AWS, will also be speaking on the important area of machine learning in the cloud.

Who's it for? This is a good conference for anyone starting to become acquainted with machine learning. Obviously data practitioners will be the core audience here, but sysadmins and app developers starting to explore machine learning would also benefit from this sort of machine learning conference.

O'Reilly AI Conference, San Francisco

When and where? September 5-7 2018, Hilton Union Square, San Francisco, CA.

What is it? According to O'Reilly's page for the event, this conference is being run to counter those conferences built around academic AI research. It's geared (surprise, surprise) towards the needs of businesses. Of course, there's a little bit of aggrandizing marketing spin there, but the idea is fundamentally a good one. It's all about exploring how cutting-edge AI research can be used by businesses. It's somewhere between the two above - practical enough to be of interest to engineers, but with enough blue-sky scope to satisfy the thought leaders.

Who is speaking? O'Reilly have some great speakers here.
There's someone else making an appearance for NVIDIA - Gaurav Agarwal, who's heading up the company's automated vehicles project. There's also Sarah Bird from Facebook, who will likely have some interesting things to say about how her organization is planning to evolve its approach to AI over the years to come.

Who is it for? This is for those working at the intersection of business and technology. Data scientists and analysts grappling with strategic business questions, and CTOs and CMOs beginning to think seriously about how AI can change their organization, will all find something here.

O'Reilly Strata Data Conference, New York

When and where? September 12-13, 2018, Javits Centre, New York, NY.

What is it? O'Reilly's Strata Data Conference is slightly more big-data focused than its AI Conference. Yes, it will look at AI and deep learning, but it's going to tackle those areas from a big data perspective first and foremost. It's more established than the AI Summit (it actually started back in 2012 as Strata + Hadoop World), so there's a chance it will have a slightly more conservative vibe. That could be a good or bad thing, of course.

Who is speaking? This is one of the biggest big data conferences on the planet. As you'd expect, the speakers are from some of the biggest organizations in the world, from Cloudera to Google and AWS. There are a load of names we could pick out, but the one we're most excited about is Varant Zanoyan from Airbnb, who will be talking about Zipline, Airbnb's new data management platform for machine learning.

Who's it for? This is a conference for anyone serious about big data. There's going to be a considerable amount of technical detail here, so you'll probably want to be well acquainted with what's happening in the big data world.

ODSC Europe 2018, London

When and where? September 19-22, 2018, Novotel West, London, UK.

What is it? The Open Data Science Conference is very much all about the open source communities that are helping push data science, machine learning, and AI forward. There's certainly a business focus, but the event is as much about collaboration and ideas. They're keen to stress how mixed the crowd is at the event. From data scientists to web developers, academics and business leaders, ODSC is all about inclusivity. It's also got a clear practical bent. Everyone will want different things from the conference, but learning is key here.

Who is speaking? ODSC haven't yet listed speakers, simply stating on their website that "our speakers include some of the core contributors to many open source tools, libraries, and languages". This indicates the direction of the event - community driven, and all about the software behind it.

Who's it for? More than any of the other machine learning conferences listed here, this is probably the one that really is for everyone. Yes, it might be more technical than theoretical, but it's designed to bring people into projects. Speakers want to get people excited, whether they're an academic, app developer, or CTO.

MLConf SF, San Francisco

When and where? November 14 2018, Hotel Nikko, San Francisco, CA.

What is it? MLConf has a lot in common with ODSC. The focus is on community and inclusivity rather than being overtly corporate. However, it is very much geared towards cutting-edge research from people working in industry and academia - this means it has a little more of a specialist angle than ODSC.

Who is speaking? At the time of writing, MLConf are on the lookout for speakers.
If you're interested, submit an abstract - guidelines can be found here. However, the event does have Uber's Senior Data Science Manager Franzisca Bell scheduled to speak, which is sure to be an interesting discussion of the organization's current thinking and the challenges that come with having huge amounts of data at its disposal.

Who's it for? This is an event for machine learning practitioners and students. Level of expertise isn't strictly an issue - an inexperienced data analyst could get a lot from this. With some key figures from the tech industry attending, there will certainly be something for those in leadership and managerial positions too.

AI Expo, Santa Clara

When and where? November 28-29, 2018, Santa Clara Convention Center, Santa Clara, CA.

What is it? Santa Clara's AI Expo is one of the biggest machine learning conferences. With four different streams, including AI technologies, AI and the consumer, AI in the enterprise, and data analytics for AI and IoT, the event organizers are really trying to make their coverage comprehensive.

Who is speaking? The event's website boasts 75+ speakers. The most interesting include Elena Grewel, Airbnb's Head of Data Science, Matt Carroll, who leads developer relations at Google Assistant, and LinkedIn's Senior Director of Data Science, Xin Fu.

Who is it for? With so much on offer, this has wide appeal. From marketers to data analysts, there's likely to be something for everyone. However, with so much going on, you do need to know what you want to get out of an event like this - so be clear on what AI means to you and what you want to learn.

Did we miss an important machine learning conference? Are you attending any of these this year? Let us know in the comments - we'd love to hear from you.

Looking at the different types of Lookup cache

Savia Lobo
20 Nov 2017
6 min read
Note: The following is an excerpt from the book Learning Informatica PowerCenter 10.x by Rahul Malewar. In this article, we walk through the various types of lookup cache, based on how a cache is defined.

Cache is the temporary memory that is created when you execute a process. It is created automatically when a process starts and is deleted automatically once the process is complete. The amount of cache memory is decided based on the property you define at the transformation level or session level. You usually leave the property at its default value so that, as required, the Integration Service can increase the size of the cache. If the size required for caching the data is more than the cache size defined, the process fails with an overflow error. There are different types of caches available.

Building the Lookup Cache - Sequential or Concurrent

You can define the session property to create the cache either sequentially or concurrently.

Sequential cache

When you choose to create the cache sequentially, the Integration Service caches the data row by row as the records enter the lookup transformation. When the first record enters the lookup transformation, the lookup cache is created, and the matching record from the lookup table or file is stored in the cache. This way, the cache stores only the matching data, which helps save cache space by not storing unnecessary data.

Concurrent cache

When you choose to create the cache concurrently, the Integration Service does not wait for the data to flow from the source; it caches the complete data first. Once the caching is complete, it allows the data to flow from the source. When you select a concurrent cache, performance improves compared to a sequential cache, since the scanning happens internally using the data stored in the cache.

Persistent cache - the permanent one

You can configure the cache to save the data permanently. By default, the cache is created as non-persistent; that is, the cache will be deleted once the session run is complete. If the lookup table or file does not change across session runs, you can reuse the existing persistent cache.

Suppose you have a process that is scheduled to run every day, and you are using a lookup transformation to look up a reference table that is not supposed to change for six months. When you use a non-persistent cache, the same data is stored in the cache every day, which wastes time and space. If you choose to create a persistent cache, the Integration Service makes the cache permanent in the form of a file in the $PMCacheDir location, so you save the time spent creating and deleting the cache every day.

When the data in the lookup table changes, you need to rebuild the cache. You can define the condition in the session task to rebuild the cache by overwriting the existing cache. To rebuild the cache, you need to check the rebuild option in the session properties.
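Informatica builds and manages these caches internally, but the behavior is easy to picture with a small sketch. The following Python snippet is not PowerCenter code; the dictionary lookup source, field names, and row values are made up for illustration. It only contrasts a sequential cache, filled row by row as matching keys arrive, with a concurrent cache that is loaded in full before any source row is processed.

# Conceptual sketch only -- not Informatica code. It mimics how a sequential
# cache fills row by row while a concurrent cache is built in full up front.
LOOKUP_TABLE = {101: "Laptop", 102: "Monitor", 103: "Keyboard"}  # stand-in lookup source


def lookup_sequential(source_rows, cache):
    """Cache a lookup value only when its key first arrives (sequential cache)."""
    for row in source_rows:
        key = row["product_id"]
        if key not in cache and key in LOOKUP_TABLE:
            cache[key] = LOOKUP_TABLE[key]        # store only matching data
        yield {**row, "product_name": cache.get(key)}


def lookup_concurrent(source_rows, cache):
    """Build the whole cache before any source row is processed (concurrent cache)."""
    cache.update(LOOKUP_TABLE)                    # cache the complete data up front
    for row in source_rows:
        yield {**row, "product_name": cache.get(row["product_id"])}


rows = [{"order": 1, "product_id": 102}, {"order": 2, "product_id": 101}]
print(list(lookup_sequential(rows, {})))
print(list(lookup_concurrent(rows, {})))

In the same spirit, a persistent cache behaves like the concurrent version with the finished dictionary written out to disk and reloaded on the next run instead of being rebuilt.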
Sharing the cache - named or unnamed

You can enhance performance and save cache memory by sharing the cache when multiple lookup transformations are used in a mapping. If the lookup transformations have the same structure, sharing the cache enhances performance by creating the cache only once, so we avoid building the same cache multiple times. You can share the cache as either named or unnamed.

Sharing unnamed cache

If you have multiple lookup transformations used in a single mapping, you can share the unnamed cache. Since the lookup transformations are present in the same mapping, naming the cache is not mandatory. The Integration Service creates the cache while processing the first record in the first lookup transformation and shares the cache with the other lookups in the mapping.

Sharing named cache

You can share a named cache with multiple lookup transformations in the same mapping or in another mapping. Since the cache is named, you can assign the same cache using its name in the other mapping. When you process the first mapping with a lookup transformation, the Integration Service saves the cache in the defined cache directory under the defined cache file name. When you process the second mapping, it searches the same location for that cache file and uses the data. If the Integration Service does not find the mentioned cache file, it creates a new cache.

If you run multiple sessions simultaneously that use the same cache file, the Integration Service processes both sessions successfully only if the lookup transformations are configured to read only from the cache. If both lookup transformations try to update the cache file, or if one lookup tries to read the cache file while the other tries to update it, the session fails because of the conflict in processing.

Sharing the cache helps enhance performance by reusing the cache that has already been created. This way, we save processing time and repository space by not storing the same data multiple times for lookup transformations.

Modifying cache - static or dynamic

When you create a cache, you can configure it to be static or dynamic.

Static cache

A cache is said to be static if it does not change with the changes happening in the lookup table; the static cache is not synchronized with the lookup table. By default, the Integration Service creates a static cache. The lookup cache is created as soon as the first record enters the lookup transformation, and the Integration Service does not update the cache while it is processing the data.

Dynamic cache

A cache is said to be dynamic if it changes with the changes happening in the lookup table; the dynamic cache is synchronized with the lookup table. You can make the cache dynamic from the lookup transformation properties. The lookup cache is created as soon as the first record enters the lookup transformation, and the Integration Service keeps updating the cache while it is processing the data. The Integration Service marks a new row inserted into the dynamic cache as an insert. For a record that is updated, it marks the record as an update in the cache. For every record that doesn't change, the Integration Service marks it as unchanged.

You use the dynamic cache when you process slowly changing dimension tables. For every record inserted into the target, the record is inserted into the cache. For every record updated in the target, the record is updated in the cache. A similar process happens for deleted and rejected records.
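Again as a conceptual sketch rather than Informatica code, and with hypothetical field names, the snippet below shows the idea behind a dynamic cache: the cache is updated while rows are processed, and each row is flagged as an insert, an update, or unchanged, much as happens when loading a slowly changing dimension.

# Conceptual sketch only -- not Informatica code, and the field names are made up.
# It shows a dynamic cache being kept in sync while rows are processed, with each
# row flagged as an insert, an update, or unchanged.
def process_with_dynamic_cache(source_rows, cache):
    for row in source_rows:
        key, value = row["customer_id"], row["city"]
        if key not in cache:
            cache[key] = value          # new row: insert into the cache (and target)
            flag = "insert"
        elif cache[key] != value:
            cache[key] = value          # changed row: update the cache (and target)
            flag = "update"
        else:
            flag = "unchanged"          # no change: leave cache and target alone
        yield {**row, "row_flag": flag}


cache = {1: "Pune"}                      # state carried over from earlier rows
rows = [{"customer_id": 1, "city": "Pune"},
        {"customer_id": 1, "city": "Mumbai"},
        {"customer_id": 2, "city": "Delhi"}]
for out in process_with_dynamic_cache(rows, cache):
    print(out)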

So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company's servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, and developers' sense of ownership suffers because they don't own more of the deployment pipeline and production responsibilities.

The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes that they must master to keep the applications running. As these teams evolve, we are seeing a trend to specifically hire people into the role of DevOps engineer. What this role is, and what type of skills you might need to succeed as a DevOps engineer, is what we will cover in this article.

The Basics

Almost every job description you are going to find for a DevOps engineer is going to require some level of proficiency in the desired production operating systems, and Linux is probably the most common. You will need to have a very good understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, netstat, and others should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues.

Learn the package manager systems for the various distributions of Linux to better understand the underpinnings of how they work. From RPM and Yum to Apt and Apk, the managers vary widely, but the core ideas are very similar in each. You should understand how to use the managers to script machine configurations and understand how modern containers are built.

Coding

The type of language you need for a DevOps role is going to depend quite a bit on the particular company. Java, C#, JavaScript, Ruby, and Python are all popular languages. If you are a devout Java follower, then choosing a .NET shop might not be your best choice. Use your discretion here, but the job is going to require a working knowledge of coding in one or more languages. At a minimum, you will need to understand how the language's build chain works, and you should be comfortable reading the system's error logs and understanding what they are telling you.
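To make the point about light scripting and reading error logs a little more concrete, here is a small, hypothetical Python utility of the kind a DevOps engineer might throw together. The log path and threshold are made-up values, and only standard-library calls are used; treat it as a sketch rather than a production tool.

#!/usr/bin/env python3
# Hypothetical example of a small ops script: count log lines by severity and
# warn when the root filesystem is nearly full. The log path and threshold are
# made-up values; only the standard library is used.
import shutil
from collections import Counter

LOG_FILE = "/var/log/myapp/app.log"    # made-up path
DISK_WARN_PERCENT = 90                 # made-up threshold


def severity_counts(path):
    """Return a count of log lines per severity keyword."""
    counts = Counter()
    with open(path, errors="replace") as handle:
        for line in handle:
            for level in ("ERROR", "WARN", "INFO"):
                if level in line:
                    counts[level] += 1
                    break
    return counts


def disk_usage_percent(mount="/"):
    """Return how full the given mount point is, as a percentage."""
    usage = shutil.disk_usage(mount)
    return usage.used / usage.total * 100


if __name__ == "__main__":
    print(severity_counts(LOG_FILE))
    percent_full = disk_usage_percent()
    if percent_full > DISK_WARN_PERCENT:
        print(f"WARNING: root filesystem is {percent_full:.1f}% full")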
Cloud Management

Gone are the days of uploading a WAR file to a directory on the server. It's very likely that you are going to be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation, and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment with, and there are many training classes on the different components.

Source Code Control

Git is currently the tool of choice for source code control. Git gives a team a decentralized SCM system that is built to handle branching and merging operations with ease. The workflows that teams use vary, but a good understanding of how to merge branches, rebase, and fix commit issues is required in the role. DevOps engineers are usually looked to for help in addressing "interesting" Git issues, so good, hands-on experience is vital.

Automation Tooling

A new automation tool has probably been released in the time it takes to read this article. There will always be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet, and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each has a slightly different take on how configurations are written and deployed, but the concepts are similar, and a good background in any one of them is more often than not a requirement for a DevOps role. Each of these systems requires a good understanding of either Ruby or Python, and these languages appear quite a bit in the various tools used in the DevOps space.

A desire to improve systems and processes

While not an exhaustive list, mastering this set of skills will accelerate anyone's journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve upon the systems and processes that are used in the development lifecycle, you will be an excellent DevOps engineer.

About the author

Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.

How are container technologies changing programming languages?

Xavier Bruhiere
11 Apr 2017
7 min read
In March 2013, Solomon Hykes presented Docker, which democratized access to Linux containers. The underlying technology, control groups, had already been incubating for a few years at Google. But Docker abstracted away the complexity of the container lifecycle, and adoption skyrocketed among developers. In June 2016, Datadog published some compelling statistics about Docker adoption: the industry as a whole was increasingly adopting containers for production.

Since everybody is talking about how to containerize everything, I would like to take a step back and study how it is influencing the development of our most fundamental medium: programming languages. The rise of Golang, the Java 8 release, Python 3.6 improvements: how do language development and the containerization market play together in 2017?

Scope of Container Technologies

Let's define the scope of what we call container technologies. Way back in 2006, two Google engineers started to work on a new technology for partitioning hierarchical groups of tasks. They called it cgroups and submitted the code to the Linux kernel. This lightweight approach to virtualization (sorry Mike) was an opportunity for infrastructure-heavy companies, and Heroku and Google, among others, took advantage of it to orchestrate so-called containers. Put simply, they were now able to think of application deployment as the dynamic manipulation of these deterministic runtimes. Whatever the code or the business logic, it was encapsulated into a uniform execution format.

Cgroups are very low level, though, and tooling around the original primitives quickly emerged, like LXC, backed by Canonical. Then Solomon Hykes came in and made the technology widely accessible with Docker. The possibilities were endless and, indeed, developers and startups alike rushed in all directions. Lately, however, the hype seems to have cooled down. Docker's market share is being questioned while the company sorts out its business strategy.

At the end of the day, developers forget about vendors and technology and just want simple tooling for more efficient coding. Docker Compose, Red Hat Container Development Kit, GC Container Builder, and local Kubernetes are very sophisticated pieces of technology that hide the details of the underlying container mechanics. What they give to engineers are powerful primitives for advanced development practices: development/production environment parity, transparent service replication, and predictable runtime configuration.

However, this is not just about development correctness or convenience, considering how containers are eating the IaaS landscape. It is also about deployment optimization and resilience. Tech giants who operate very large infrastructures have developed incredible frameworks, often in the open, to push how fast they can deploy auto-scalable, self-healing, zero-downtime fleets. Apache Mesos, backed by Microsoft, and Kubernetes, by Google, make at least two promises:

Safe and agile deployments at the (micro-)service level

Reliable orchestration with elegant service discovery, load balancing, and failure management (because you have to accept that production always goes wrong at some point)

Containers enabled us to manage complexity with infrastructure design patterns like microservices or serverless. Behind the hype of these buzzwords, engineers try to improve team collaboration, safe and agile deployments, large project maintenance, and monitoring. However, we quickly came to realize it was sold with a DevOps tax.
Fortunately, the software industry has hard-won experience of striking such a balance, and we are starting to see it converge toward the most robust approaches. This overview of the container landscape hopefully provides what we need to now study how containers have impacted the development of programming languages. We will look first at their ecosystems, and then we will dive into language design itself.

Language Ecosystems and Usages

Most developers are now aware of how invasive container technologies can be. They make their way into your development toolbox and into how your company manages its servers. Some will argue that the fundamentals of coding did not evolve much, but the trend is hard to ignore anyway. While we are free, of course, to stay away from Silicon Valley's latest fashions, I think containers tackle a problem most languages struggle with: dependencies and packaging.

Go, for example, got packaging right, but it's still trying to figure out how to handle dependency versioning and vendoring. JavaScript, on the other hand, has npm to manage fine-grained third-party code, but build tools are scattered all over GitHub. Containers won't spare you the pain of setting things up (they target runtimes, not build steps), but they can lower the bar of language adoption. Official images can run most standard language projects, and one can both give a language a try and deploy a basic hello world in no time. When you realize that Go 1.5+ needs Go 1.4 to be compiled, it can be a relief to just docker run your five-lines-long main.go.

Growing a language community is a sure way to develop its tooling and libraries, but containers also influence how we design those components. They are the cloud counterparts of the current functional trend. We tend to embrace a world where both functions and servers are immutable and single-purpose. We want predictable, pure primitives (in the mathematical sense), all of that to match increasingly distributed and intensive workloads. I hope those approaches come from a product's needs but, obviously, having the right technology at hand drives innovation. As software engineers in 2017, we also design libraries and tools with containers in mind: high-performance networking, distributed process management, data pipelines, and so on.

Language Design

What about languages themselves? To get things straight, I don't think containers influence how Guido van Rossum designs Python. And that is the point of containers: they abstract the runtime to let you focus on your code™ (it is literally on every Docker-based PaaS landing page). You should be able to design whatever logic you need, and containers will come in handy to help you run it when needed. I do believe, however, that the latest evolutions of languages and the rise of containers serve the same maturation of ideas in the tech community.

Correctness at compile time: Python 3.6, Elm, and JavaScript ES7 are all bringing typing back to their languages (see type hints or TypeScript, and the short sketch after this list). An application running locally will launch just the same in production. You can even run tests against multiple runtimes without complex scripts or heavy setup.

Simplicity: Go won a lot of its market share thanks to its initial simplicity, taking a lot of decisions for you. Containers try their best to offer one unified way to run code, whatever the stack.

Functional: Scala, JavaScript, and Elixir all push immutable state, function composition with support for lambda expressions, and function purity. It echoes the serverless trend that promotes function as a service, and most of the providers leverage some kind of container technology to bring the required agility to their platforms.
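As a tiny illustration of the correctness point above (my own example, not the author's), here is what Python 3.6-style annotations look like; a checker such as mypy can validate them before the code ever runs inside a container.

# My own illustration, not from the article: Python 3.6 annotations that a
# checker such as mypy can validate before the code ever runs in a container.
from typing import List

replicas: int = 3                      # Python 3.6 variable annotation


def scale(service: str, count: int) -> List[str]:
    """Return the container names for a scaled service."""
    return [f"{service}-{i}" for i in range(count)]


print(scale("api", replicas))
# A type checker would flag scale("api", "three") without running anything.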
There is something elegant about having language features, programmatic design patterns, and infrastructure operations going hand in hand. While I don't think one of them directly influences the others, I certainly believe that progress in each smooths the way for innovation in the rest.

Conclusion

Container technologies, and the fame around them, are finally starting to converge toward fewer and more robust usages. At the same time, infrastructure designs, new languages, and evolutions of existing ones seem to promote the same underlying patterns: simple, functional, decoupled components. I think this coincidence comes from industry maturity and openness, more than, as I said, from one technology influencing the other.

Containers, however, are shaking up how we collaborate and design tools for the languages we love. They change the way we onboard developers learning a new language. They change how we set up local development environments with micro-replicas of the production topology. They change the way we package and deploy code. And, most importantly, they enable architectures like microservices or lambdas that influence how we design our programs.

In my opinion, programming language design should continue to evolve decoupled from containers. They serve different purposes, and given the pace of the tech industry, major languages should never depend on shiny new tools. That being said, the evolution of a language now comes with the activity of its community: what they build, how they use it, and how they spread it in companies. Coping with containers is an opportunity to bring in new developers, improve production robustness, and accelerate both technical and human growth.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.