
Tech Guides

What does 'Infrastructure as Code' actually mean?

Raka Mahesa
12 Apr 2017
5 min read
Fifteen years ago, adding an additional server to a project's infrastructure was a process that could take days, if not weeks. Nowadays, thanks to cloud technology, you can get a new server ready for your project in just a few seconds with a couple of clicks.

New and better technology brings its own set of problems, though. Because it's now very easy to add servers to your project, your capability to manage your project infrastructure usually doesn't grow as fast as the size of your infrastructure. This leads to a lot of problems in the backend, such as inconsistent server configurations or configurations that can't be replicated. It's a common problem among massive web projects, so various approaches to tackle it are being devised. One such approach is known as 'Infrastructure as Code.'

Before we go on talking about Infrastructure as Code, let's first make sure that we understand the basics: infrastructure configuration and automation. Before an infrastructure (or a server) can be used to run a web application, it first has to be configured so that it has all of the requirements needed to run that application. This configuration ranges from the very basic, such as operating systems and database types, to user accounts and software runtimes. And when dealing with virtual machines, configuration can even include the amount of RAM, storage space, and processing power a server would have.

All of those configurations are usually done by typing the required commands on a terminal connected to the infrastructure. Of course, you can do it all manually, typing the commands to install the needed software one by one, but what if you have to do that for tens, if not hundreds, of servers? That's where infrastructure automation comes in. By saving all of the needed commands to a script file, we can easily repeat this process on other servers that need to be configured by simply running that script.

All right, now that we have the basics behind us, let's move on. What does Infrastructure as Code really mean?

Infrastructure as Code, also known as Programmable Infrastructure, is a process for managing computing and networking infrastructure using software development methodologies. These methodologies include version control, testing, continuous integration, and other practices. It's an approach for handling servers by treating infrastructure as if it were code, hence the name.

But wait: because infrastructure automation uses script files for configuring servers, isn't it already treating infrastructure as code? Does that mean Infrastructure as Code is just a cool term for infrastructure automation? Or are they actually different things?

Well, infrastructure automation is indeed one part of the Infrastructure as Code process, but it's the other part—the software development practices part—that differentiates the two. By employing software project methodologies, Infrastructure as Code can ensure that the automation will work reliably and consistently on every part of your infrastructure.

For example, by using a version control system on the server configuration script, any changes made to the file will be tracked, so when a problem arises in the server, we can find out exactly which change caused that problem. Another software development practice that can be applied to infrastructure automation is automated testing.
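To make the testing idea concrete, here is a minimal sketch of what a check on a configuration script might look like. It is shown in JavaScript purely for illustration; the config shape, values, and file layout are hypothetical rather than taken from any particular tool:

    // A minimal sketch: sanity-check a declarative server configuration in CI
    // before it is applied to any machine. All names and values are hypothetical.
    const assert = require('assert');

    const config = {
      packages: ['nginx', 'nodejs'],
      ports: [80, 443],
      users: [{ name: 'deploy', shell: '/bin/bash' }],
    };

    // Fail fast on obviously broken configuration changes.
    assert(config.packages.length > 0, 'declare at least one package');
    assert(config.ports.every(function (p) {
      return Number.isInteger(p) && p > 0 && p < 65536;
    }), 'ports must be valid TCP ports');
    assert(config.users.every(function (u) {
      return u.name && u.shell;
    }), 'every user needs a name and a shell');

    console.log('configuration checks passed');

Run automatically on every change to the configuration files, a check like this rejects a broken change before it ever reaches a server.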
Having this practice in place would make it safer for developers to add changes to the script, because any error added to the project can be detected quickly. All of these practices help ensure that the configuration script files are correct and reliable, which in turn ensures a robust and consistent infrastructure.

There's also one more thing to consider: do not confuse Infrastructure as Code (IaC) with Infrastructure as a Service (IaaS). Infrastructure as a Service is a cloud computing service that provides infrastructure to developers and helps them manage it. This service allows developers to easily monitor and configure resources in their infrastructure. Examples of these types of cloud services are Amazon Web Services, Microsoft Azure, and Google Compute Engine.

So, if both Infrastructure as Code and Infrastructure as a Service help developers manage their infrastructure, how exactly do they differ? Well, to put it in simple terms, IaaS is a tool (the hammer) that gives developers a way to quickly configure their infrastructure, while Infrastructure as Code is a method for using such tools (the carpentry). Just as you can do carpentry without a hammer, you're not restricted to using IaaS if you want to apply Infrastructure as Code practices to your infrastructure.

That said, one of the big requirements for being able to run Infrastructure as Code practices is to run the project on a dynamic infrastructure system: a platform where you can programmatically create, destroy, and manage infrastructure resources on demand. While you can implement such a system on your own private infrastructure, most of the IaaS platforms available on the market already have this capability, making them the perfect place to run the Infrastructure as Code process.

That's the gist of the Infrastructure as Code approach. There are plenty of tools out there that enable you to apply Infrastructure as Code, including Ansible, Puppet, and Chef. Go check them out if you want to try this methodology for yourself.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets at @legacy99.


FAT* 2018 Conference Session 1 Summary: Online Discrimination and Privacy

Aarthi Kumaraswamy
26 Feb 2018
5 min read
The FAT* 2018 Conference on Fairness, Accountability, and Transparency is a first-of-its-kind international and interdisciplinary peer-reviewed conference that seeks to publish and present work examining the fairness, accountability, and transparency of algorithmic systems. This article covers the research papers presented in Session 1, on online discrimination and privacy.

FAT* hosted the presentation of research work from a wide variety of disciplines, including computer science, statistics, the social sciences, and law. It took place on February 23 and 24, 2018, at the New York University Law School, in cooperation with its Technology Law and Policy Clinic. The conference brought together over 450 attendees, from academic researchers to policymakers and machine learning practitioners, and featured 17 research papers, 6 tutorials, and 2 keynote presentations from leading experts in the field.

Session 1 explored ways in which online discrimination can happen and privacy can be compromised. The papers presented look for novel and practical solutions to some of the problems identified. We introduce our readers to the papers presented at FAT* 2018 in this area, thereby summarising the key challenges and questions explored by leading minds on the topic and their proposed answers to those issues.

Session Chair: Joshua Kroll (University of California, Berkeley)

Paper 1: Potential for Discrimination in Online Targeted Advertising

Problems identified in the paper: Much recent work has focused on detecting instances of discrimination in online services, ranging from discriminatory pricing on e-commerce and travel sites like Staples (Mikians et al., 2012) and Hotels.com (Hannák et al., 2014) to discriminatory prioritization of service requests and offerings from certain users over others in crowdsourcing and social networking sites like TaskRabbit (Hannák et al., 2017). In this paper, we focus on the potential for discrimination in online advertising, which underpins much of the Internet's economy. Specifically, we focus on targeted advertising, where ads are shown only to a subset of users that have attributes (features) selected by the advertiser.

Key Takeaways:
- A malicious advertiser can create highly discriminatory ads without using sensitive attributes such as gender or race, and the current methods used to counter the problem are insufficient.
- The potential for discrimination in targeted advertising arises from the ability of an advertiser to use the extensive personal (demographic, behavioral, and interests) data that ad platforms gather about their users to target their ads.
- Different targeting methods offered by Facebook: attribute-based targeting, PII-based (custom audience) targeting, and look-alike audience targeting.
- Three basic approaches to quantifying discrimination and their tradeoffs: based on the advertiser's intent, based on the ad targeting process, and based on the targeted audience (outcomes).

Paper 2: Discrimination in Online Personalization: A Multidisciplinary Inquiry

The authors explore ways in which discrimination may arise in the targeting of job-related advertising, noting the potential for multiple parties to contribute to its occurrence. They then examine the statutes and case law interpreting the prohibition on advertisements that indicate a preference based on protected class and consider its application to online advertising. This paper provides a legal analysis of a real case, which found that simulated users selecting a gender in Google's Ad Settings received employment-related advertisements at differing rates along gender lines despite identical web browsing patterns.

Key Takeaways:
- The authors' analysis of existing case law concludes that Section 230 may not immunize advertising platforms from liability under the FHA for algorithmic targeting of advertisements that indicate a preference for or against a protected class.
- Possible causes of the ad targeting: the targeting was fully a product of the advertiser selecting gender segmentation; the targeting was fully a product of machine learning, with Google alone selecting gender; the targeting was fully a product of the advertiser selecting keywords; or the targeting was fully the product of the advertiser being outbid for women.
- Given the limited scope of Title VII, the authors conclude that Google is unlikely to face liability on the facts presented by Datta et al. Thus, the advertising prohibition of Title VII, like the prohibitions on discriminatory employment practices, is ill-equipped to advance the aims of equal treatment in a world where algorithms play an increasing role in decision making.

Paper 3: Privacy for All: Ensuring Fair and Equitable Privacy Protections

In this position paper, the authors argue for applying recent research on ensuring sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so also privacy regimes may disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections.

Key Takeaways:
- Research questions posed: Are technical or non-technical privacy protection schemes fair? When and how do privacy protection technologies or policies improve or impede the fairness of the systems they affect? When and how do fairness-enhancing technologies or policies enhance or reduce the privacy protections of the people involved?
- Data linking can lead to deanonymization; live recommenders can also be attacked to leak information.
- The authors propose a new definition for a fair privacy scheme: a privacy scheme is (group-)fair if the probability of failure and the expected risk are statistically independent of the subject's membership in a protected class.

If you have missed Session 2, Session 3, Session 4, or Session 5 of the FAT* 2018 Conference, we have got you covered.


My Experience with KeystoneJS

Jake Stockwin
16 Sep 2016
5 min read
Why KeystoneJS?

Learning a new language can be a daunting task for web developers, but there comes a time when you have to bite the bullet and do it. I'm a novice programmer, with my experience mostly limited to a couple of PHP websites using MySQL. I had a new project on the table and decided I wanted to learn Node. Any self-respecting Node beginner has written the very basic "Hello World" node server:

    var http = require('http');
    var server = http.createServer(function(req, res) {
      res.writeHead(200, {"Content-Type": "text/plain"});
      res.end("Hello World");
    });
    server.listen(80);

Run node server.js and open localhost:80 in your web browser, and there it is. Great! It works, so maybe this isn't going to be so painful after all. Time to start writing the website!

I quickly figure out that there is quite a jump between outputting "Hello World" and writing a fully functioning website. Some more research points me to the express package, which I install and learn how to use. However, eventually I have quite the list of packages to install, and all of these need configuring in the correct way to interact with each other. At this stage, everything is starting to get a little too complicated, and my small project seems like it's going to take many hours of work to get to the final website. Maybe I should just write it in PHP, since I at least know how to use it.

Luckily, I was pointed toward KeystoneJS. I'm not going to explain how KeystoneJS works in this post, but by simply running yo keystone, my site was up and running. Keystone had configured all of those annoying modules for me, and I could concentrate on writing the code for my web pages. Adding new content types became as simple as adding a new Keystone "model" to the site, and then Keystone would automatically create all the database schemas for me and add the model to the admin UI. I was so impressed, and I had finished the whole website in just over an hour. KeystoneJS had definitely done 75% of the work for me, and I was incredibly pleased and impressed. I picked up Keystone so quickly, and I have used it for multiple projects since. It is without a doubt my go-to tool if I'm writing a website which has any kind of content management needs.

Open Source Software and the KeystoneJS Community

KeystoneJS is a completely open source project. You can see the source code on GitHub, and there is an active community of developers constantly improving and fixing bugs in KeystoneJS. It was developed by ThinkMill, a web design company. They use the software for their own work, so it benefits them to have a community helping to improve their software. Anyone can use KeystoneJS, and there is no need to give anything back, but a lot of people who find KeystoneJS really useful will want to help out. It also means that if I discover a bug, I am able to submit a pull request to fix it, and hopefully that will get merged into the code.

A few weeks ago, I found myself with some spare time and decided to get involved in the project, so I started to help out by adding some end-to-end (e2e) testing. Initially, the work I did was incorrect, but rather than my pull request just being rejected, the developers took the time to point me in the right direction. Eventually I worked out how everything worked, and my pull request was merged into the code. A few days later, I had written a few more tests. I'd quite often need to ask questions on how things should be done, but the developers were all very friendly and helpful.
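To make the "model" idea mentioned earlier concrete, here is a minimal sketch of what a Keystone 0.3-style content model can look like. The Post list and its fields are hypothetical examples rather than anything from the project described in this post, and the file is assumed to live inside an existing Keystone app:

    // A hypothetical content model for illustration; once registered, Keystone
    // generates the database schema and admin UI screens for it.
    var keystone = require('keystone');
    var Types = keystone.Field.Types;

    var Post = new keystone.List('Post');

    Post.add({
      title: { type: String, required: true, initial: true },
      publishedAt: { type: Types.Date, default: Date.now },
      content: { type: Types.Html, wysiwyg: true },
    });

    Post.register();

Registering a list like this is roughly all it takes for a new content type to show up in the admin UI, which is the kind of work the post describes Keystone doing automatically.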
Soon enough, I understood quite a bit about the testing and managed to add some more tests. It was not long before the project lead, Jed Watson, asked me if I would like to be a KeystoneJS member, which would give me access to push my changes straight into the code without having to make pull requests. For me, as a complete beginner, being able to say I was part of a project as big as this meant a lot. To begin with, I felt as though I was asking so many questions that I must just be annoying everyone and should probably stop. However, Jed and everyone else quickly changed that, and I felt like I was doing something useful.

Into the future

The entire team is very motivated to make KeystoneJS as good as it can be. Once version 0.4 is released, there will be many exciting additions in the pipeline. The admin UI is going to be made more customizable, and user permissions and roles will be implemented, among many other things. All of this is made possible by the community, who dedicate lots of their time for free to make this work. The fact that everyone is contributing because they want to and not because it's what they're paid to do makes a huge difference. People want to see these features added so that they can use them for their own projects, and so they are all very committed to making it happen. On a personal note, I can't thank the community enough for all their help and support over the last few weeks, and I am very much looking forward to being part of Keystone's development.

About the author

Jake Stockwin is a third-year mathematics and statistics undergraduate at the University of Oxford, and a novice full-stack developer. He has a keen interest in programming, both in his academic studies and in his spare time. Next year, he plans to write his dissertation on reinforcement learning, an area of machine learning. Over the past few months, he has designed websites for various clients and has begun developing in Node.js.


Common Kafka Addons

Timothy Chen
11 Nov 2014
5 min read
Apache Kafka is one of the most popular choices for a durable and high-throughput messaging system. Kafka's protocol doesn't conform to any queue-agnostic standard protocol (such as AMQP), and it provides concepts and semantics that are similar to, but still different from, other queuing systems. In this post I will cover some common Kafka tools and add-ons that you should consider employing when using Kafka as part of your system design.

Data mirroring

Most large-scale production systems deploy their systems to multiple data centers (or availability zones/regions in the cloud) to either avoid a SPOF (single point of failure) when a whole data center is brought down, or to reduce latency by serving systems closer to customers at different geo-locations. Having all Kafka clients reading across data centers to access data as needed is quite expensive in terms of network latency, and it affects service performance. For Kafka to have the best performance in throughput and latency, all services should ideally communicate with a Kafka cluster within the same data center. Therefore, the Kafka team built a tool called MirrorMaker that is also employed in production at LinkedIn. MirrorMaker itself is an installed daemon that sets up a configured number of replication streams on the destination cluster pulling from the source cluster, and it is able to recover from failures and records its state in ZooKeeper. With MirrorMaker you can set up Kafka clients that read and write only from clusters in the same DC; data from other data centers is replicated asynchronously, and local changes are polled by the other clusters as well.

Auditing

Kafka often serves as a pub/sub queue between a frontend collecting service and a number of downstream services that include batching frameworks, logging services, or event processing systems. Kafka works really well with various downstream services because it holds no state for each client (which is impossible for AMQP), and it allows each consumer to consume data at different offsets of the same partition with high performance. Also, systems typically have not just one Kafka cluster, but multiple Kafka clusters. These clusters act as a pipeline, where a consumer of one Kafka cluster feeds into, say, a recommendation system that writes its output into another set of Kafka clusters.

One common need for a data pipeline is logging/auditing: ensuring that all of the data you produce from the source is reliably delivered into each stage and, if it is not, knowing what percentage of the data is missing. Kafka out of the box doesn't provide this functionality, but it can be added using Kafka directly. One implementation is to give each stage of your pipeline an ID, and in the producer code at each stage write out the sum of the number of records in a configurable window, pushed into Kafka along with the stage ID into a specific topic (say, "counts") at each stage of the pipeline. For example, with a Kafka pipeline that consists of stages A -> B -> C, you could imagine simple code such as the following to write out counts at a configured window:

    producer.send(topic, messages);
    sum += messages.count();
    lastUpdatedAt = System.currentTimeMillis();

    if (lastUpdatedAt - lastAuditedAt >= WINDOW_MS) {
      lastAuditedAt = System.currentTimeMillis();
      auditing.send("counts", new Message(new AuditMessage(stageId, sum, lastAuditedAt).toBytes()));
    }

At the very bottom of the pipeline, the counts topic will have the aggregate of counts from each stage, and a custom consumer can pull in all of the count messages, partition them by stage, and compare the sums. The results at each window can also be graphed to show the number of messages that are flowing through the system. This is what is done at LinkedIn to audit their production pipeline; it has been suggested for a while that this be incorporated into Kafka itself, but that hasn't happened yet.

Topic partition assignments

Kafka is highly available, since it offers replication and allows users to define the number of acknowledgments and the broker assignment of each replica for each partition. By default, if no assignment is given, it's randomly assigned. Random assignment might not be suitable, especially if you have more specific requirements for how you want to place these data replicas. For example, if you are hosting your data in the cloud and want to withstand an availability zone failure, then placing your data replicas in more than one AZ would be a good idea. Another example would be rack awareness in your data center. You can definitely build an extra tool that generates a specific replica assignment based on all of this information.

Conclusion

The Kafka tools described in this post are some of the common tools and features that companies in the community often employ, but depending upon your system there might be other needs to consider. The best way to see if someone has implemented a similar feature as open source is to email the mailing list or ask on IRC (freenode #kafka).

About The Author

Timothy Chen is a distributed systems engineer at Mesosphere Inc. and the Apache Software Foundation. His interests include open source technologies, big data, and large-scale distributed systems. He can be found on GitHub as tnachen.


Taking advantage of SpriteKit in Cocoa Touch

Milton Moura
04 Jan 2016
7 min read
Since Apple announced SpriteKit at WWDC 2013, along with iOS 7, it has been promoted as a framework for building 2D games with high-performance graphics and engaging gameplay. But, as I will show you in this post, by taking advantage of some of its features in your UIKit-based application, you'll be able to add some nice visual effects to your user interface without pulling too much muscle. We will use the latest stable Swift version, along with Xcode 7.1, for our code examples. All the code in this post can be found in this GitHub repository.

SpriteKit's infrastructure

SpriteKit provides an API for manipulating textured images (sprites), including animations and image filters, with optional physics simulation and sound playback. Although Cocoa Touch also provides other frameworks for these things, like Core Animation, UIDynamics and AV Foundation, SpriteKit is especially optimized for doing these operations in batch and performs them at a lower level, transforming all graphics operations directly into OpenGL commands.

The top-level user interface object for SpriteKit is the SKView, which can be added to any application view controller and is then used to present scene objects, of type SKScene, composed of possibly multiple nodes with content, that will render seamlessly with other layers or views that might also be contained in the application's current view hierarchy. This allows us to add smooth and optimized graphical effects to our application UI, enriching the user experience and keeping our refresh rate at 60hz.

Our sample project

To show how to combine typical UIKit controls with SpriteKit, we'll build a sample login screen, composed of UITextFields, UIButtons and UILabels, for our wonderful new WINTER APP. But instead of a boring, static background, we'll add an animated particle effect to simulate falling snow and apply a Core Image vignette filter to mask them under a nifty spotlight-type effect.

1. Creating the view hierarchy

We'll start with a brand new Swift Xcode project, selecting the iOS > Single View Application template and opening the Main Storyboard. In the existing View Controller Scene, we add a new UIView that anchors to its parent view's sides, top and bottom, and change its class from the default UIView to SKView. Also make sure the background color for this view is dark, so that the particles that we'll add later have a nice contrast. Now, we'll add a few UITextFields, UILabels and UIButtons to replicate the following login screen. Also, we need an IBOutlet to our SKView. Let's call it sceneView. This is the SpriteKit view where we will add the SKScene with the particle and image filter effect.

2. Adding a Core Image filter

We're done with UIKit for now. We currently have a fully (well, not really) functional login screen and it's now time to make it more dynamic. The first thing we need is a scene, so we'll add a new Swift class called ParticleScene. In order to use SpriteKit's objects, let's not forget to add an import statement and declare that our class is an SKScene:

    import SpriteKit

    class ParticleScene : SKScene {
        ...
    }

The way we initialize a scene in SpriteKit is by overriding the didMoveToView(_:) method, which is called when a scene is added to an SKView. So let's do that and set up the Core Image filter. If you are not familiar with Core Image, it is a powerful image processing framework that provides over 90 filters that can be applied in real time to images, videos and, coincidentally, to SpriteKit nodes, of type SKNode. An SKNode is the basic unit of content in SpriteKit, and our SKScene is one big node for rendering. Actually, SKScene is an SKEffectNode, which is a special type of node that allows its content to be post-processed using Core Image filters. In the following snippet, we add a CIVignetteEffect filter centered on our scene and with a radius equal to the width of our view frame:

    override func didMoveToView(view: SKView) {
        scaleMode = .ResizeFill

        // initialize the Core Image filter
        if let filter = CIFilter(name: "CIVignetteEffect") {
            // set the default input parameter values
            filter.setDefaults()
            // make the vignette center be the center of the view
            filter.setValue(CIVector(CGPoint: view.center), forKey: "inputCenter")
            // set the radius to be equal to the view width
            filter.setValue(view.frame.size.width, forKey: "inputRadius")
            // apply the filter to the current scene
            self.filter = filter
            self.shouldEnableEffects = true
        }
        presentingView = view
    }

If you run the application as is, you'll notice a nice spotlight effect behind our login form. But we're not done yet.

3. Adding a particle system

Since this is a WINTER APP, let's add some falling snowflakes in the background. Add a new SpriteKit Particle File to the project and select the Snow template. Next, we add a method to set up our particle node emitter, an SKEmitterNode, that hides all the complexity of a particle system:

    func startEmission() {
        // load the snow template from the app bundle
        emitter = SKEmitterNode(fileNamed: "Snow.sks")
        // emit particles from the top of the view
        emitter.particlePositionRange = CGVectorMake(presentingView.bounds.size.width, 0)
        emitter.position = CGPointMake(presentingView.center.x, presentingView.bounds.size.height)
        emitter.targetNode = self
        // add the emitter to the scene
        addChild(emitter)
    }

To finish things off, let's create a new property to hold our particle scene in the ViewController and start the particle emission in the viewDidAppear() method:

    class ViewController: UIViewController {
        ...
        let emitterScene = ParticleScene()
        ...
        override func viewDidAppear(animated: Bool) {
            super.viewDidAppear(animated)
            emitterScene.startEmission()
        }
    }

And we're done! We now have a nice UIKit login form with an animated background that is much more compelling than a simple background color, gradient or texture.

Where to go from here

You can explore more Core Image filters to add stunning effects to your UI, but be warned that some are not prepared for real-time, full-frame rendering. Indeed, SpriteKit is very powerful, and you can even use OpenGL shaders in nodes and particles. You are welcome to check out the source code for this article, and you'll see that it has a little extra Core Motion trick that shifts the direction of the falling snow according to the position of your device.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies.
With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com


Art Hack Day

Michael Ang
28 Nov 2014
6 min read
Art Hack Day is an event for hackers whose medium is art and artists whose medium is tech. A typical Art Hack Day event brings together 60 artist-hackers and hacker-artists to collaborate on new works in a hackathon-style sprint of 48 hours leading up to a public exhibition and party. The artworks often demonstrate the expressive power of new technology, radical collaboration in art or a critical look at how technology affects society. The technology used is typically open, and sharing source code online is encouraged.

[Photo: Hacking an old reel-to-reel player for Mixtape. Photo by Vinciane Verguethen.]

As a participant (and now an organizer) of Art Hack Day I've had the opportunity to take part in three of the events. The spirit of intense creation in a collaborative atmosphere drew me to the Art Hack Day. As an artist working with technology it's often possible to get bogged down in the technical details of realizing a project. The 48-hour hackathon format of Art Hack Day gives a concrete deadline to spur the process of creation and is short enough to encourage experimentation. When the exhibition of a new work is only 48 hours away, you've got to be focused and solve problems quickly. Going through this experience with 60 other people brings an incredible energy.

Each Art Hack Day is based around a theme. Some examples include "Lethal Software", "afterglow", and "Disnovate". The Lethal Software art hack took place in San Francisco at Gray Area. The theme was inspired by the development of weaponized drones, pop-culture references like Robocop and The Terminator, and software that fights other software (e.g. spam vs spam filters). Artist-hackers were invited to create projects engaging with the theme that could be experienced by the public in person and online. Two videogame remix projects included KillKillKill!!!, where your character would suffer remorse after killing the enemy, and YODO Mario (You Only Die Once), where the game gets progressively glitched out each time Mario dies and the second player gets to move the holes in the ground in an attempt to kill Mario. DroneML presented a dance performance using drones, and Cake or Death? (the project I worked on) repurposed a commercial drone into a CupCake Drone that delivered delicious pastries instead of deadly missiles.

[Photo: A video game character shows remorse in KillKillKill!!!]

The afterglow Art Hack Day in Berlin, part of the transmediale festival, posed a question relating to the ever-increasing amount of e-waste and overabundance of collected data: "Can we make peace with our excessive data flows and their inevitable obsolescence? Can we find nourishment in waste, overflow and excess?" Many of the projects reused discarded technology as source material. PRISM: The Beacon Frame caused controversy when a technical contractor thought the project seemed closer to the NSA PRISM surveillance project than an artistic statement and disabled the project. The Art Hack Day version of PRISM gave a demonstration of how easily cellular phone connections can be hijacked - festival visitors coming near the piece would receive mysterious text messages such as "Welcome to your new NSA partner network". With the show just blocks away from the German parliament and recent revelations of NSA spying, the piece seemed particularly relevant.

[Photo: A discarded printer remade into a video game for PrintCade]

Disnovate was hosted by Parsons Paris as part of the inauguration of their MFA Design and Technology program. Art Hack Day isn't shy of examining the constant drive for innovation in technology, and even the hackathon format that it uses: "Hackathons have turned into rallies for smarter, cheaper and faster consumption. What role does the whimsical and useless play in this society? Can we evaluate creation without resorting to conceptions of value? What worldview is implied by the language of disruption; what does it clarify and what does it obscure?"

Many of the works in this Art Hack Day had a political or dystopian statement to make. WAR ZONE recreated historical missile launches inside Google Earth, giving a missile's-eye view of the trajectory from launch site to point of impact. The effect was both mesmerizing and terrifying. Terminator Studies draws connections between the fictional Terminator movie and real-world developments in the domination of machines and surveillance. Remelt literally recast technology into a primitive form by melting down aluminum computer parts and forming them into Bronze Age weapons, evoking the fragility of our technological systems and our often warlike nature. On a more light-hearted note, Drinks At The Opening Party presented a table of empty beer bottles. As people took pictures of the piece using a flash, a light sensor would trigger powerful shaking of the table that would actually break the bottles. Trying to preserve an image of the bottles would physically destroy them.

[Photo: Edward Snowden gets a vacation in Paris as Snowmba. Photo by Luca Lomazzi.]

The speed with which many of these projects were created is testament to the abundance of technology that is available for creative use. Rather than using technology in pursuit of "faster, better, more productive", artist-hackers are looking at the social impacts of technology and its possibilities for expression and non-utilitarian beauty. The collaborative and open atmosphere of the Art Hack Day gives rise to experimentation and new combinations of ideas. Technology is one of the most powerful forces shaping global society. The consummate artist-hacker uses technology in a creative way for social good. Art Hack Day provides an environment for these artist-hackers and hacker-artists to collaborate and share their results with the public. You can browse through project documentation and look for upcoming Art Hacks on the Art Hack Day website or via @arthackday on Twitter.

Project credits

- Mixtape by John Nichols, Jenn Kim, Khari Slaughter and Karla Durango
- KillKillKill!!! by bigsley and Martiny
- DroneML by Olof Mathé, Dean Hunt, and Patrick Ewing
- YODO Mario (You Only Die Once) by Tyler Freeman and Eric Van
- Cake or Death? by Michael Ang, Alaric Moore and Nicolas Weidinger
- PRISM: The Beacon Frame by Julian Oliver and Danja Vasiliev
- PrintCade by Jonah Brucker-Cohen and Michael Ang
- WAR ZONE by Nicolas Maigret, Emmanuel Guy and Ivan Murit
- Terminator Studies by Jean-Baptiste Bayle
- Remelt by Dardex
- Drinks At The Opening Party by Eugena Ossi, Caitlin Pickall and Nadine Daouk
- Snowmba by Evan Roth

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is a participant and sometimes organizer of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

What is Mar-Tech?

Hari Vignesh
10 Oct 2017
6 min read
Blending marketing and software worlds together

Marketing is rapidly becoming one of the most technology-dependent departments within companies. It is now a key driver of IT purchasing, and this trend is only expected to grow.

Marketing has evolved drastically over the last few years. Today's marketers own more customer data and touch points than ever before, and more than any other department. Marketing has become a tech-powered force, and technical capabilities are slowly being ingrained into marketing DNA. This rapid switch has created close relationships between marketing and IT departments. CMOs have never been more likely to attend meetings alongside the CIO. Marketing is becoming a technology-driven discipline, where code and data are fundamental.

Nowadays, in the digital world, software is marketing's eyes, ears, and hands. We can no longer afford to do something just because we think it may increase sales; we base our decisions on data and use powerful software to execute our marketing initiatives efficiently. Marketing software really helps us to simplify our day-to-day lives and save time on regular manual tasks, giving us an opportunity to focus on new campaigns and strategies. It helps us avoid doing repetitive tasks and instead allows us time for innovation, creativity, brainstorming, and putting some soul into our products and services. Thanks to marketing software, we can easily identify the tactics that are driving new customers, converting leads, and so on. And that means better ROI, and happier project managers.

New digital channels and devices, such as search engines, social media, and mobile, have complicated the journey of our customers. There is now such a vast amount of information that it is no longer possible to manually sift through it all to separate what is essential data and what is not. We live in a very complicated and stressful world, but if we use marketing software to identify what information is really key for us, we can get 10 steps closer to our target. For example, it is no longer unfair to assume that by clicking on a t-shirt we like, we may well get recommendations based on our purchase history, preferences, and location. Could we have expected that even up to five years ago? Now, ease of purchase is a norm that is regularly implemented by marketing teams, because in the current climate, everything is done fast. We are in the era of short-term gratification, folks, and if we are to meet the exceeding expectations of clients and customers, software will give us the time we need to not only meet demand, but also continue to innovate and grow for future challenges.

Defining Mar-Tech

Marketing technologies provide the tools that enable marketers to… well… market. They automate difficult, time-consuming, and repetitive manual tasks to surface customer insight. Built by technologists, used by marketers. Marketing technology should aim to remove or significantly reduce the need for IT involvement. In short, it strives to keep marketing within marketing.

Divisions of Mar-Tech

- Internal technology — what we use to manage and analyze marketing operations, such as SEO, competitive analysis, social media monitoring, etc.
- External technology — what we use to reach our target audience and deliver our content: websites, ads, landing pages, email campaigns, apps, etc.
- Product technology — the features we add to our products and services and how they impact the marketing ecosystem. For example, social sharing features, location features with GPS, RFID and participation in the IoT, or digital products with viral capabilities.

Useful Mar-Techs

Analytics

Marketing is at an inflection point where the performance of channels, technologies, ads, offers — everything — is trackable like never before. Over a century ago, retail and advertising pioneer John Wanamaker said, "Half the money I spend on advertising is wasted; the trouble is I don't know which half." Today, smart marketers do know which half isn't working. But to do that efficiently, you need to have web analytics programs set up, and have people on the marketing team who know how to use and interpret data.

Conversion Optimization

Conversion optimization is the practice of getting people who come to your website (or wherever you are engaging with them) to do what you want them to do as much as possible, and usually that involves filling out a form so that, at the very least, you have their email address.

Email Marketing

Email marketing is the 800-pound gorilla of digital marketing. And I'm not talking about spamming people by buying lists that are being sold to your competitors as well. I'm talking about getting people to give you permission to email them additional information, and then sending only valuable content tailored to that person's interests.

Search Engine Marketing

Search engine marketing includes both paid search ads, like Google AdWords, and search engine optimization (SEO) to try to get high organic search listings for your website content. Since most people, even B2B buyers of big-ticket items, use search as part of their work, you need to be there when these people are searching for what you're selling.

Remarketing

You've experienced remarketing: when you go to a website and then leave it, that site's ads appear on other sites that you visit. It's really easy to set up and incredibly cost-effective, because you're only advertising to people who have already expressed enough interest in you to come to your site.

Mobile Friendly

Half of all emails are now opened on smartphones, and soon half of all searches will be done on them too, so all websites need to be mobile friendly. But today, less than a third of them are. Simply put, you need to have a site that is easy to read and use on a phone. If you don't, Google penalizes you with lower mobile search rankings.

Marketing Automation

Marketing automation brings it all together. It is a terrific technology that includes analytics, online forms, tracking people's activity when they come to your website, personalizing website content, managing email campaigns, facilitating the alignment of sales and marketing through lead scoring and automated alerts to salespeople, informing these activities with data from your CRM and third-party sources, and more.

Forecast for the next few years in Mar-Tech

Huge amounts of data about buyers, channels, and competitors are now available to CMOs, and this opens up endless opportunities. Companies that work in this field become unicorns in months, not years. If you compare the number of companies dedicated to this subject a year ago to now, you will see that their number has more than doubled. The best of these companies use machine learning and data science to deliver market insights and capabilities. This is especially valuable for B2B companies, where lead times are longer and purchase decisions are more considered. Companies that utilize Mar-Tech are the most likely to be there at the right time with the right service for the customer!
About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


What is Docker and Why is it So Popular?

Julian Gindi
28 Aug 2015
6 min read
Docker is going to revolutionize how we approach application development and deployment; trust me. Unfortunately, it's difficult to get started, because all the resources out there are either too technical or lacking in the intuition behind why Docker is amazing. I hope this post finds a nice balance between the two.

What is Docker?

Docker is a technology that wraps Linux's operating-system-level virtualization. This is an extremely verbose way of saying that Docker is a tool that quickly creates isolated environments for you to develop and deploy applications in. Another magical feature of Docker is that it allows you to run ANY Linux-based operating system within the container, offering even greater flexibility. If you need to run one application with CentOS and another with Ubuntu - no problem!

Containers and Images

Docker uses the term containers to represent an actively running environment that can run almost any piece of software, whether it be a web application or a service daemon. Docker images are the "building blocks" that act as the starting point for our containers. Images become containers, and containers can be made into images. Images are typically base operating system images (such as Ubuntu), but they can also be highly customized images containing the base OS plus any additional dependencies you install on it. Alright, enough talk. It's time to get our hands dirty.

Installation

You will need to be running either Linux or Mac to use Docker. Installation instructions can be found here. One thing to note: I am running Docker on Mac OS X. The only major difference between running Docker commands on a Linux machine versus a Mac machine is that you have to sudo your commands on Linux. On Mac (as you will see below), you do not have to. There is a bit more setup involved in running Docker on a Mac (you actually run commands through a Linux VM), but once set up, they behave the same.

Hello Docker!

Let's start with a super basic Docker interaction and step through it piece by piece.

    $ docker run ubuntu:14.04 /bin/echo 'Hello Docker!'

This command does a number of things:

- docker run takes a base image and a command to run inside a new container created from that image.
- Here we are using the "ubuntu" base image and specifying a tag of "14.04". Tags are a way to use a specific version of an image. For example, we could have run the above command with ubuntu:12.04.
- Docker images are downloaded if they do not already exist on your local machine, so the first time you run the above command it will take a bit longer.
- Finally, we specify the command to run in the newly instantiated container. In this simple case, we are just echoing Hello Docker.

If run successfully, you should see 'Hello Docker!' printed to your terminal. Congrats! You just created a fresh Docker container and ran a simple command inside... feel the power!

An 'Interactive' Look at Docker

Now let's get fancy and poke around inside of a container. Issue the following command, and you will be given a shell inside a new container.

    $ docker run -i -t ubuntu:14.04 /bin/bash

The -i and -t flags are a way of telling Docker that you want to "attach" to this container and directly interact with it. Once inside the container, you can run any command you would normally run on an Ubuntu machine. Play around a bit and explore. Try installing software via apt-get, or running a simple shell script. When you are done, you can type exit to leave.

One thing to note: containers are only active as long as they are running a service or command. In the case above, the minute we exited from our container, it stopped. I will show you how to keep containers running shortly. This does, however, highlight an important paradigm shift: Docker containers are really best used for small, discrete services. Each container should run one service or command and cease to exist when it has completed its task.

A Daemonized Docker Service

Now that you know how to run commands inside of a Docker container, let's create a small service that will run indefinitely in the background. This container will just print "Hello Docker" until you explicitly tell it to stop.

    $ docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo Hello Docker; sleep 1; done"

Let's step through this command: as you know by now, it runs a small shell command inside of a container running Ubuntu 14.04. The new addition is the -d flag, which tells Docker to run this container in the background as a daemon. To verify this container is actually running, issue the command docker ps, which gives you some helpful information about our running containers.

    $ docker ps
    CONTAINER ID   IMAGE          COMMAND                CREATED         STATUS        PORTS   NAMES
    09e17400c7ee   ubuntu:14.04   /bin/sh -c 'while tr   2 minutes ago   Up 1 minute           sleepy_pare

You can view the output of your command by taking the NAME and issuing the following command:

    $ docker logs sleepy_pare
    Hello Docker
    Hello Docker
    Hello Docker

To stop the container and remove it, you can run docker rm -f sleepy_pare.

Congratulations! You have learned the basic concepts and usage of Docker. There is still a wealth of features to learn, but this should set a strong foundation for your own further exploration into the awesome world of Docker.

About the Author

Julian Gindi is a Washington, DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at iStrategyLabs, where he does everything from system administration to designing and building deployment systems. He is most passionate about operating system design and implementation, and in his free time contributes to the Linux kernel.


10 Predictions for Tech in 2025

Richard Gall
02 Nov 2015
4 min read
Back to the Future Day last month got us thinking – what will the world look like in 2025? And what will technology look like? We've pulled together our thoughts into one listicle packed with predictions – please don't hold us to them…

1. Everything will be streamed – all TV will be streamed through the internet. Every new TV will be smart, which means applications will become a part of the furniture in our homes. Not only will you be able to watch just about anything you can imagine, you'll also be able to play any game you want.

2. The end of hardware – with streaming dominant, hardware will become less and less significant. You'll simply need a couple of devices and you'll be able to do just about anything you want. With graphene flooding the market, these devices will also be more efficient than anything we're used to today – graphene batteries could make consumer tech last for weeks with a single charge.

3. Everything is hardware – hardware as we know it might be dead, but the Internet of Things will take over every single aspect of everyday life – essentially transforming everyday objects into hardware. From fridges to pavements, even the most quotidian artefacts will be connected to a large network.

4. Everything will be in the cloud – our stream-only future means we're going to be living in a world where the cloud reigns supreme. You can begin to see how everything fits together – from the Internet of Things to the decline in personal hardware, everything will become dependent on powerful and highly available distributed systems.

5. Microservices will be the dominant form of cloud architecture – there are a number of ways we could build distributed systems and harness cloud technology, but if 2015 is anything to go by, microservices are likely to become the most dominant way in which we deploy and manage applications in the cloud. This movement towards modular and independent units – or individual 'services' – will not simply be the agile option, but will be the obvious go-to choice for anyone managing applications in the cloud. Even in 2015, you would have to have a good reason to go back to the old, monolithic way of doing things…

6. Apple and Google rule the digital world – sure, this might not be much of a prediction given the present state of affairs, but it's difficult to see how anyone can challenge the two global tech giants. Their dominance is likely to increase, not decline. This means every aspect of our interaction with software – as consumers or developers – will be dictated by their commercial interests in 2025. Of course, one of the more interesting subplots over the next ten years will be whether we see a resistance to this standardization. Perhaps we might even see a resurgence of a more radical and vocal Open Source culture.

7. Less specialization, democratization of development – even if our experience of software is defined by huge organizations like Google and Apple, it's also likely that development will become much simpler. Web components have already done this (just take a look at React and Ember), which means JavaScript web development might well morph into something accessible to all. True, this might mean more mediocrity across the web – but it's not like we're all going to be building the same Geocities sites we were making in 2002…

8. Client and server collapse on each other – this follows on from the last prediction. The reason we're going to see less specialization in development is that the very process of development will no longer be siloed. We'll all be building complete apps that simply connect to APIs somewhere in the cloud. Isomorphic codebases will become standard – whether this means we will still be using Node.js is another matter…

9. We'll be living in a 'Post-Big Data' world – the Big Data revolution is over – it's permanently taken root in every aspect of our lives. By 2025 data will have become so 'Big', largely due to the Internet of Things, that we'll have to start thinking of better ways to deal with it and, of course, understand it. If we don't, we're going to be submerged in oceans of dirty data.

10. iOS will become sentient – iOS 30, the 2025 iteration of iOS, will become self-aware and start making decisions for humanity. I welcome this wholeheartedly, never having to decide what to eat for dinner ever again.

Special thanks to Ed Gordon, Greg Roberts, Amey Varangaonkar and Dave Barnes for their ideas and suggestions. Let us know your predictions for the future of tech – tweet us @PacktPub or add your comments below.

What's the state of tech today? And what can we expect over the next 12 months? Check out our Skills and Salary Reports to find out.

article-image-tic-tac-toe-es6
Liz Tom
17 Feb 2016
5 min read
Save for later

Tic Tac Toe in ES6

First off, what is ES6? If you're unsure, it's the newest release of JavaScript and it comes with a host of new features. Most likely you're using ES5 JavaScript. ES6 is the first major update to the language since 2009, so you can imagine some people are very excited. There are plenty of things to get super excited about with the release of ES6, but one thing I'd like to cover today is the introduction of classes! If you want to try ES6 out, you can try Babel. Babel compiles your ES6 code for you, so you can see all your neat ES6 stuff without worrying about browser support. I'm not going to go through the entire tutorial on how to build Tic Tac Toe, but will focus on ES6.

Some Background

So I've heard a lot of folks describe JavaScript object-oriented programming to me, and the word prototypal or prototype always comes up. I know that I use the word prototype to create objects, but what does that really mean? I think Mozilla does a great job at describing JavaScript, so I'm going to go with their explanation:

Prototype-based programming is an OOP model that doesn't use classes, but rather it first accomplishes the behavior of any class and then reuses it (equivalent to inheritance in class-based languages) by decorating (or expanding upon) existing prototype objects. (Also called classless, prototype-oriented, or instance-based programming.)

Here’s Mozilla’s introduction to OOP in JavaScript. The word prototype always threw me off, so I had a really hard time grasping the idea of object-oriented programming when I was first introduced to it. After taking a little break from JavaScript and learning OOP principles in another language, I came back and suddenly had a much better understanding of what was happening.

Classes

The new keyword class is really just syntactic sugar over the prototype syntax in previous releases of JavaScript. Classes make object-oriented programming in JavaScript easier to read. It doesn't change the fact that JavaScript uses a prototype-based object-oriented programming model.

Tic Tac Toe

We're going to write a super basic version of Tic Tac Toe with ES6 just so you can see some of the differences in syntax. So in Tic Tac Toe there are several elements that we need: a board, a game, some players and spaces. In ES5 when I wanted to make a player object I did the following:

var Player = function(name, marker) {
  this.name = name;
  this.marker = marker;
};

In ES6 the same thing looks like this:

class Player {
  constructor(options) {
    this.name = options.name;
    this.marker = options.marker;
  }
}

Cool! So far, there's no big difference and I'm actually writing more code using ES6 than I would if I was using ES5. You'll see a bigger difference when writing methods. So let's make a space next. In ES5 I would have made a space like this:

var Space = function(xCoord, yCoord) {
  this.xCoord = xCoord;
  this.yCoord = yCoord;
};

Space.prototype = {
  markSpace: function(player) {
    if (!this.marked) {
      this.marked = player;
      return true;
    }
  }
};

The same thing in ES6 looks like this:

class Space {
  constructor(options) {
    this.xCoord = options.xCoord;
    this.yCoord = options.yCoord;
  }

  markSpace(player) {
    if (!this.marked) {
      this.marked = player;
      return true;
    }
  }
}

When we want to make an instance of a class in ES6, it's the same as in ES5 (remember that our ES6 Player takes an options object):

let player1 = new Player({ name: 'Liz', marker: 'X' });

Pretty cool, right?
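The tutorial stops at Player and Space, but the board follows the same class pattern. As a rough sketch (the Board shape and the 3x3 loop here are just illustrative and are not part of the original game code), a Board class could compose the Space class we just wrote:

class Board {
  constructor() {
    // Build a 3x3 grid of Space instances.
    this.spaces = [];
    for (let x = 0; x < 3; x++) {
      for (let y = 0; y < 3; y++) {
        this.spaces.push(new Space({ xCoord: x, yCoord: y }));
      }
    }
  }

  findSpace(xCoord, yCoord) {
    // Look up a space by its coordinates.
    return this.spaces.find(function(space) {
      return space.xCoord === xCoord && space.yCoord === yCoord;
    });
  }

  markSpace(xCoord, yCoord, player) {
    // Delegate to Space.markSpace; truthy only when the space was free.
    const space = this.findSpace(xCoord, yCoord);
    return space ? space.markSpace(player) : false;
  }
}

let board = new Board();
board.markSpace(0, 0, player1); // player1 claims the top-left space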
Subclassing

Another neat feature of ES6 is that you can subclass very easily. Let's say we have another type of player that we'd like to implement: a FancyPlayer! In ES5 you'd have to do that with something like this:

var Player = function(name, marker) {
  this.name = name;
  this.marker = marker;
};

Player.prototype.playerInfo = function() {
  return 'Player Name: ' + this.name + ' Player Marker: ' + this.marker;
};

function FancyPlayer(name, marker, fancyPlayerProperty) {
  Player.call(this, name, marker);
  this.fancyPlayerProperty = fancyPlayerProperty;
}

FancyPlayer.prototype = Object.create(Player.prototype);
FancyPlayer.prototype.constructor = FancyPlayer;

FancyPlayer.prototype.playerInfo = function() {
  return Player.prototype.playerInfo.call(this) + ' Fancy: ' + this.fancyPlayerProperty;
};

With ES6, assuming our Player class defines the same playerInfo method, you can subclass like this:

class FancyPlayer extends Player {
  constructor(options) {
    super(options);
    this.fancyPlayerProperty = options.fancyPlayerProperty;
  }

  playerInfo() {
    return super.playerInfo() + ' Fancy: ' + this.fancyPlayerProperty;
  }
}

Conclusion

ES6 is adding a lot of great things and making syntax a lot cleaner. The class keyword doesn't change much under the hood, but it will hopefully make your code cleaner to read, and perhaps give those who are struggling with object-oriented programming in JavaScript an easier time. I know when I was first learning object-oriented principles, the JavaScript syntax really confused me. So hopefully you have fun trying out ES6 syntax! To learn more about the world of OOP check out our Introduction to Object-Oriented Programming using Python, JavaScript, and C# - a perfect jumping point to using OOP for any situation.

About the author

Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz’s passion for full stack development and digital media makes her a natural fit at Pop Art. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
article-image-manage-your-apps-without-losing-your-mind-0
Michael Herndon
10 Dec 2015
6 min read
Save for later

Manage your apps without losing your mind

Setting up a new computer, reinstalling Windows, updating software, or managing the apps on multiple computers is a hassle. Even after the creation of app stores, like the Windows Store, finding and managing the right apps for your Windows desktop is still a tedious chore. Work is piling up, and the last thing you need to worry about is hunting down and reinstalling your favorite apps after a recent computer crash. Ain't nobody got time for that. Thankfully there is already a solution for your app management woes: package managers. It's not the most intuitive name, but the software gets the job done. If you're wondering why you haven't heard of package managers, they are a poorly marketed tool. Even some software developers that I have met over the years are unaware of their existence. All of which begs the question: what is a package manager?

What is a package manager?

It's a system of applications that controls the installation, update, and removal of packages on platforms such as Windows and OS X. Packages can bundle one or more applications, or parts of applications, and include the scripts to manage these items. You could think of it as an app store that includes the apps your app store forgot to add for your desktop. The Linux world has had package managers like apt-get and yum for years. Macs have a package manager called Homebrew. For Windows, there is a tool called Chocolatey. The name Chocolatey is a play on words that acknowledges that the package manager is built on top of a Microsoft technology called NuGet. It can install applications like Evernote, OneDrive, OneNote, and Google Chrome from one location.

What are the benefits of package managers?

The benefits of package managers are repeatability, speed, and scale. Repeatability: a package can be installed on the same machine multiple times, or on multiple machines. Speed: a package manager makes it easy to find, download, and install software within a few keystrokes. Scale: a package manager can be used to apply the same software across multiple computers, from a home network to a whole company. It can also install multiple packages at once.

Let us assume that a company wants to install Google Chrome, Paint.NET, and custom company apps. The apps require specific installations of Java and .NET. A package manager can be used to install all those items. The company will have to create packages for its custom apps. The packages will need to specify dependencies on Java and .NET. When the package manager detects a required dependency, the dependency is installed before the app is. The package manager can apply a required list of programs to all the machines in an organization. The package manager is also capable of updating the applications. For the remainder of the article, I will focus on using Chocolatey as the package manager for Windows.

Get Chocolatey

To run Chocolatey, you need a program called PowerShell. PowerShell exists on Windows 7 and above, or Windows Server 2008 and above. Open PowerShell from the Start menu. Go to chocolatey.org and copy the installation code, iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')). Paste the code into the blue shell and press Enter.

PS C:\> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

This will install Chocolatey on your machine.

Customize the Chocolatey install

Chocolatey has two configuration options for installation.
$env:ChocolateyInstall will instruct Chocolatey to install itself to the specified path. $env:ChocolateyBinRoot will instruct Chocolatey to install folder-based programs, like PHP or MySQL, at the given location; the default is c:\tools.

For example, let us assume the company wishes to customize Chocolatey's install. It wants Chocolatey to live in the folder c:\opt\chocolatey, and it also wants the bin root folder to be c:\opt\chocolatey.

Type or copy $env:ChocolateyInstall = "c:\opt\chocolatey"; into PowerShell and hit Enter.

PS C:\> $env:ChocolateyInstall = "c:\opt\chocolatey";

Type or copy $env:ChocolateyBinRoot = "c:\opt\chocolatey"; into PowerShell and hit Enter.

PS C:\> $env:ChocolateyBinRoot = "c:\opt\chocolatey";

Copy the Chocolatey install code into PowerShell and then hit Enter.

PS C:\> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Use Chocolatey

After Chocolatey's installation finishes, you can search for packages from the command line.

List all programs with GoogleChrome in the name

To list packages, you will use the list command. Type choco list GoogleChrome into PowerShell and hit Enter.

PS C:\> choco list GoogleChrome

List all packages

Type choco list into PowerShell and hit Enter.

PS C:\> choco list

List all packages and write them to a file

Type choco list > c:\users\mhern\Desktop\packages.txt into PowerShell. Replace "mhern" with the name of your user folder on Windows. Hit Enter.

PS C:\> choco list > c:\users\mhern\Desktop\packages.txt

Install Chrome

To install a package like GoogleChrome, you will use the install command. Type choco install GoogleChrome -y into PowerShell and hit Enter. The -y flag means that you accept the license agreement for the software that you are installing.

PS C:\> choco install GoogleChrome -y

Install multiple programs

Type choco install php mysql -y into PowerShell and hit Enter. This will install PHP and MySQL onto your Windows machine. PHP and MySQL will be installed into c:\tools by default.

PS C:\> choco install php mysql -y

Upgrade a program

To upgrade a program with Chocolatey, you will use the upgrade command. In older versions of Chocolatey it was the update command. Type choco upgrade GoogleChrome -y into PowerShell and hit Enter. Though you really do not need to upgrade GoogleChrome this way, as it will update itself.

PS C:\> choco upgrade GoogleChrome -y

Upgrade all programs

Type choco upgrade all -y into PowerShell and hit Enter.

PS C:\> choco upgrade all -y

Upgrade Chocolatey

Type choco upgrade chocolatey into PowerShell and hit Enter.

PS C:\> choco upgrade chocolatey

Uninstall Chrome

To uninstall a package, use the uninstall command. Type choco uninstall GoogleChrome into PowerShell and hit Enter.

PS C:\> choco uninstall GoogleChrome

Further reading

Now you should be able to find and install applications with ease. If you want to investigate Chocolatey further, I suggest reading through the wiki, familiarizing yourself with PowerShell, and learning to run PowerShell as administrator.

Disclosure

I supported Chocolatey's Kickstarter campaign.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all-around mischievous nerdy guy.

article-image-why-uber-is-wrong
Ed Gordon
28 Nov 2014
5 min read
Save for later

Why Uber was wrong (and I’m right…)

I recently stumbled across an old Uber blog that offended me. It was a puerile clickbait article that claimed to be able to identify the areas in a city that were more prone to engage in one-night stands. It also offended me in its assumptions about the data it presented to support its claim. This blog isn’t about the group of people that it calls “RoGers” (“Ride of Glory-ers”); it’s about how, when you go looking for results, you can find whatever data you need. If you want to read it in its full glory, the original Uber article can be found here.

So. Some facts. People who have one-night stands (according to Uber):

Often do it on a Friday and Saturday
Leave after 10pm and return within 4-6 hours
In San Francisco, do it from the highlighted locations – the darker areas are where the proportion of RoGers outweighs the proportion of non-RoGers

From the A–B locations and a somewhat artificial timeframe, we’re led to believe that we’re looking at the sinners of San Francisco. Let’s have a look at an alternate reality where there are fewer people having Uber-fuelled sex. My theory is this: young people go out. They go out after the hours of 10pm, and return before 6am. They go out to drink within a few miles of their home. They do this, as all good employees, on a Friday and Saturday night because, well, who likes working with a hangover? I will call them ReGs (Regular Everyman Guys/Girls).

Locating my demographic

To establish where people actually live, I took a sample from 91,000 apartment listings from datasf and ran it through Google Map Engine, which lets n00bs like me create maps. Google Map Engine only lets me do 500 rows of data for free, so it’s limited, but you can see that most people live in north-eastern San Francisco. This will come as no surprise to people who live there, but as I’ve only ever seen San Francisco in films (Homeward Bound et al.), I thought it prudent to prove it. Basically, we can establish that people live where Uber say they live. Lock the doors, this is going to be a wild one.

Who’s living here though?

I think the maximum age of decency for a 6-hour drinking bender is probably about 33. So I needed to know that a large portion of the RoGer area was full of young people occupying these apartments. Finding an age map of a city was really difficult, but after Googling “San Francisco Age Map” I found one at http://synthpopviewer.rti.org/. The blue represents ages 15–34. Red is 55–64. Young people live in San Francisco! Who knew? More specifically, the “heat map” areas seem to match up nicely to the Uber data.

But where do they go?!

A city full of young people. What do they do at night? Are they really RoGers? There’s an article from growthhackers that says the no. 1 reason for Uber use (and subsequent growth) is “Restaurants and Nightlife”. It seems like a reasonable assumption that people want to drink rather than drive, so I mapped out the restaurants in San Fran (hoping that restaurants = bars and clubs too). Again, there’s a clear grouping around similar areas. Young people live in San Francisco. They are surrounded by restaurants and bars. I know from my own experience with the body of 27,000 Birmingham students, and from being a worker in my mid-20s, that most go out on a Friday and Saturday night, that they do it after 10pm (normally about 11pm), and that they return at around 3am. They aren’t going out for Rides of Glory, they’re going out to practice expressive dance until the early hours.

What it all means

My narrative still smells a bit, right? I’m ignoring that half of the “young people” in my sample can’t drink, I’m assuming that the people who can drink actually go out at night, and I’m assuming that my restaurant map also represents bars and nightclubs. The data about apartment listings was basically pointless. And the same can be said for the data about the RoGers of Uber. We’re told that because a young city, full of workers and students, takes trips between 10pm and 6am, they’re all playing away. It’s an analysis as full of assumptions as my own. Uber knew what they wanted (more clicks) before they came to their conclusion. When you do this in the real world, it can lead to big mistakes.

Data-driven decisions aren’t a half-and-half approach. If you choose that path, you must be dedicated to it – get all the possible relevant data points, and allow people who know what these data points mean to come up with conclusions from them. When you ask a question before you get the data, you end up with what you want. In this scenario, I ended up with ReGs. Uber ended up with RoGers. I think I’m more correct than they are because their conclusion is stupid. But we’re both likely to be wrong in the end. We went into the big world of data with a question (what would make a good blog), and ended up with clouded judgment. When you’re investing the future of your company based on clouded data, this approach would have bigger implications than producing a clickbait blog. Next time, I’ll get the data first and then let that tell me what will make a good blog.

article-image-falling-love-chatbots
Ben James
06 Dec 2016
5 min read
Save for later

Falling in love with chatbots

I recently watched the excellent film Her by Spike Jonze. Without giving too much away, the lead character slowly becomes enamoured with his new, emotionally-capable OS. It got me thinking, "How far off are chatbots from being like this?"

Lately, countless businesses are turning to chatbots as a way of interacting more meaningfully with their users. Everyone from Barclays to UPS is trying bots in the hope of increasing engagement and satisfaction. Why talk to a suit over the phone when you can message a bot whenever you want? They'll always reply to you and they can read your personal data just like magic, solving your problems in a timely and friendly way. However, one inherent trend I've noticed is that everything is task based. Platforms like Amazon's Alexa, api.ai, and wit.ai are all based around resolving a user query to an intent and then acting on it. That's great, and works rather well nowadays, but what if you want to tell a story? What if you want to make your user feel grief, happiness, or guilt?

Enter SuperScript

SuperScript is a chatbot framework designed around conversations, written from the bottom up in lovely asynchronous Node.js. Unlike other frameworks, it puts more emphasis on how a typical human conversation might go than on more traditional methods like intent mapping. Let's find out a little more about how it works. To get started, you can run the following in your terminal. Note that we're using the alpha version at the moment, which is nearing completion but is still liable to change:

npm install -g superscript@alpha
bot-init myBot   // Creates a bot in myBot directory
cd myBot
npm install
parse
npm run build
npm run start

Then, in another tab, run:

telnet localhost 2000

And you'll find that you can talk to a very simple script that'll say “hello!”. Let's see how that script works. Open chat/main.ss and take a look. Triggers (what you say to the bot) are prefixed with the + symbol, while replies (what it says back to you) are prefixed with the - symbol, like this:

+ Hi
- Greetings, human! How are you?

Under the hood, Hi gets transformed into ~emohello, which matches a number of different greetings like Hey, Greetings or just the original Hi. So, whenever you greet the bot, it'll say Greetings, human!.

Conversations

Conversations are the first building block of SuperScript's true power, and look something like this:

+ * good *
% Greetings, human! How are you?
- That's great to hear!

The % references a previous reply that the bot must have said in order for this bit of chat to be considered. In this case, SuperScript will only look at this gambit (a trigger plus a reply) if the last bot reply was Greetings, human! How are you?. Let's imagine it was. Note how we have a star "*" in the trigger this time. This matches anything. Here, if the user says something like Good thanks or I'm feeling good, then we'll match the trigger and send the response That's great to hear!. But if the user doesn't say anything with good in it, we're not going to match anything at present. So let's write a new trigger to catch anything else. We can actually go a step further and use a plugin function to analyse the user's input:

+ (*)
% Greetings, human! How are you?
- ^sentiment(<cap1>)

Here, we have a conversation just as we did earlier. But now we have a plugin function, sentiment, which will analyze the captured input <cap1> (what the user said to the bot) and respond with an appropriate message. Let's write a plugin function using the npm library sentiment.
Create a new file, say, sentiment.js, and stick it into the plugins folder in your bot directory. Inside this file, write something like this:

import sentimentLib from 'sentiment';

const sentiment = function sentiment(message, callback) {
  const score = sentimentLib(message).score;
  if (score > 2) {
    return callback(null, "Good for you.");
  } else if (score < -2) {
    return callback(null, "I'm glad you share my angst.");
  }
  return callback(null, "I'm pretty so-so too.");
};

export default { sentiment };

Now we can access our plugin from our bot, so give it a whirl. Pretty neat, huh?

Topics

The other part of SuperScript that we get a load of power and flexibility from is its support for topics. In normal conversations, humans tend to spend a lot of time on a specific subject before moving on to another one. For example, this hipster programming bot will talk about programming until you move on to another topic:

> topic programming

+ My favourite language is Brainfuck
- Mine too! I love its readability!

+ My favourite language is (*)
- Hah! <cap1> SUCKS! Brainfuck is where it's at!

+ spaces or tabs
- Why not mix both?

+ * 1337 *
- You speak 13375P34K too?!

< topic

Once you've hit any of these triggers, you're in the programming topic and won't get out unless you say something that doesn't match any of the triggers within the topic. Topics and conversations are the foundations of building conversational interfaces, and you can build a whole lot of interesting experiences around them. We're really at the beginning of these types of chatbots, but as interest grows in interactive stories, they'll only get better and better.

About the author

Ben James is currently the Technical Director at To Play For, creating games, interactive stories, and narratives using artificial intelligence. Follow us on Twitter at @ToPlayFor.
article-image-you-down-oop
Liz Tom
11 Nov 2015
7 min read
Save for later

You down with OOP?

I just did a quick Google search; I'm not the first one to think I'm clever. But I couldn't think of a better name. So, a little background on me. I'm still a beginner and I'm still learning a lot. I've been a Software Developer for about a year now and some basic concepts are just starting to become clearer to me. So look at this as a very beginner-level intro to OOP, and hopefully you'll be so interested by what I have to say that you'll explore further!

What is OOP?

OOP stands for Object-Oriented Programming. Why would you want to learn about this concept? If done correctly, it makes code more maintainable in the long run. It's easier to follow and easier for another dev to jump into the project, even if that other dev is yourself in the future.

Huh? What's the difference?

Object-Oriented Programming vs Functional vs Procedural. I've always understood each of these concepts by themselves, but not how they fit in with each other or what the advantages of choosing one type over another are. These are not the only programming paradigms out there, but they are very popular ones. Before we can really dive into OOP, it's important to understand how it differs from other programming paradigms.

Object-Oriented: Object-oriented programming is a type of abstraction done by using classes and objects. Classes define a data structure, can send and receive messages, and can manipulate data. Objects are the instances of these classes.

Functional: Functional programming is a set of functions. These functions take input and produce output; ideally there is no internal state that would affect the output for a given input. This means no matter what input you put in, you should always receive the same output.

Procedural: Procedural is probably what you first learned. This involves telling the computer a set of steps that it should execute.

Polymorphism

It's morphin' time! I've heard this word a lot but I was too nervous to ask what it meant. It's okay, I'm here to help you out. This is a concept found in object-oriented programming; it's a way to have similar objects act differently. The classic example is using a class Animal. You can have many different animals and they can all have the method speak. However, cats and dogs will say different things when they speak, so dog.speak() would result in 'woof' while cat.speak() would result in 'meow'.

Inheritance

Objects can inherit features from other objects. If you have an object that shares a lot of functionality with another object but varies slightly, you might want to use inheritance. But don't abuse inheritance; just because you can do it doesn't mean it's something you should be using everywhere. OOP is used to help make more maintainable code, and if you have classes inheriting all over the place from each other, it ends up making your code less maintainable.
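To make polymorphism and inheritance concrete before we get to the zoo example below, here is a minimal sketch of my own (the Dog and Cat classes are purely illustrative and not part of the zoo code that follows) showing how the same speak() call behaves differently per subclass:

class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        # Each subclass provides its own sound.
        raise NotImplementedError


class Dog(Animal):
    def speak(self):
        return 'woof'


class Cat(Animal):
    def speak(self):
        return 'meow'


# The same speak() call does different things depending on the object: polymorphism.
for animal in [Dog('Rex'), Cat('Tabby')]:
    print(animal.name + ' says ' + animal.speak())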
Encapsulation

Encapsulation is a fancy word describing private methods and variables. These methods and variables should only be available to the class they belong to, which helps to make sure that you aren't doing anything with a different class that will have unintended results. While in Python there is no actual privacy, there is a convention to help prevent you from using something you shouldn't be touching: prepending variable names with an underscore.

Python!

So now you have a little background on the basics of OOP, what does this look like in the real world? I'll use Python as an example in this case. Python is a multi-paradigm language that is great for object-oriented programming and that's fairly easy to read, so even if you haven't seen any Python in your life before, you should be able to get a handle on what's going on.

So we're going to make a zoo. Zoos have animals, so in order to create an animal, let's start with an Animal class.

class Animal():
    def __init__(self, name):
        self.name = name

All we've done here is make an animal class and set the name. What can differ between animals besides their name? Number of legs? Color? The food they eat? There are so many things that might be different, but let's just pick a few. Don't worry too much about syntax. If you like the look of Python, I recommend checking it out because Python is fun to write!

class Animal():
    def __init__(self, name, legs, color, food):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food

Now I can make any animal I want.

dog = Animal('dog', 4, 'black', 'dog food')
cat = Animal('cat', 4, 'orange', 'cat food')

But maybe we should take care of these animals? All of the animals at this zoo happen to get hungry based on the number of legs they have and how long it's been since their last feeding. When their hunger levels reach an 8 on a scale of 1-10 (10 being the hungriest they've ever felt in their life!), we'll feed them. First we need to allow each animal to store its hunger value.

class Animal():
    def __init__(self, name, legs, color, food, hunger):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food
        self.hunger = hunger

Now we can add a method (the name for a function that belongs to a class) to the class to see if we should feed the animal, and if we need to feed it, we'll feed it.

class Animal():
    def __init__(self, name, legs, color, food, hunger):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food
        self.hunger = hunger

    def time_to_feed(self, hours_since_meal):
        self.hunger = 0.3 * (hours_since_meal * self.legs) + self.hunger
        if self.hunger >= 8:
            print('Time to feed the ' + self.name + ' ' + self.food + '!')
        else:
            print(self.name + ' is full.')

dog = Animal('dog', 4, 'brown', 'dog food', 8)
dog.time_to_feed(8)

cat = Animal('cat', 4, 'pink', 'cat food', 2)
cat.time_to_feed(2)

Ok, but why is this good? Well, let's say I didn't do this in an object-oriented manner. I could do it this way:

dogName = 'dog'
dogLegs = 4
dogColor = 'brown'
dogFood = 'dog food'
dogHunger = 8

catName = 'cat'
catLegs = 4
catColor = 'pink'
catFood = 'cat food'
catHunger = 2

Before, when I wanted to add a hunger level, I just needed to add more parameters to my __init__ method. Now I need to make sure I'm adding the values to each object individually. I'm already tired. I could put those all in arrays, but then I'm relying on multiple arrays to always stay in the order I expect them in. Then I create the same time_to_feed function. But now it's not clear what time_to_feed is being used for. A future developer joining this project might have a hard time figuring out that you meant this for your animals.

I hope you enjoyed this little introduction to Object-Oriented Programming. If you want to jump into learning more JavaScript, why not start by finding out the difference between mutability and immutability? Read on now.

About the author

Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz's passion for full stack development and digital media makes her a natural fit at ISL.
Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets.

article-image-pep-8-beautiful-code-and-tyranny-guidelines
Dave Barnes
20 Oct 2015
4 min read
Save for later

PEP 8, beautiful code, and the tyranny of guidelines

I’m no Python developer, but I learned a great deal from Raymond Hettinger’s PyCon 2015 presentation, Beyond PEP 8: Best practices for beautiful intelligible code. It could as well have been called ‘the danger of standards’. For those not in the know (and I wasn’t), PEP 8 is a set of style guidelines for writing presentable Python code. Hettinger takes the view that it’s good, but not perfect. But even so, its mere existence and misuse are enough to cause all kinds of problems.

Bad rules and minor atrocities

The first problem with any set of guidelines is the most obvious: no set of guidelines short enough to write down will get it right in every situation. The weakest part of PEP 8 is its rule on line lengths. 79 characters is just too short for many purposes, especially with indentation, and especially when writing debugging scripts. But once the rules exist, and your code is judged against them, you had better follow them. So how will you get your code down to 79 characters per line? There are plenty of options, all PEP 8 compliant, and they all make the code worse:

Use shorter variable names.
Break the line in some arbitrary place.
Switch to two-space indentation instead of four-space.

Hettinger calls these ‘minor atrocities’ — expedient little sins that make the world worse. There are two problems at work here. First, 79 characters is just not enough for a lot of purposes. These days, 90 characters might be better. But beyond that, any absolute number is dangerous. Developers need to remember that lines above a certain length are pushing it (whether 80 or 90 characters). But they should know that sometimes a good long line is better than two bad short ones. As George Orwell said about his own guidelines: break any of them sooner than do anything outright barbarous.

Getting PEP 8'ed

Once a standard exists, there’s a great temptation to impose that standard on other people’s code arbitrarily. You might even believe you’re doing the world a favor. The PEP 8 standards warn against this with a quote from Ralph Waldo Emerson: “A foolish consistency is the hobgoblin of little minds”. But that doesn’t stop eager PEP 8ers diving in and complyifying other developers’ code. Once we know how to do something — and we believe it’s useful and productive — we’ll be tempted to jump in and do it wherever we can. Thus once somebody knows how, they can’t help but perpetrate PEP 8. With any guideline: follow it yourself sooner than impose it on others.

Missing the gorilla

But by far the biggest danger of guidelines is that they can distract us from what really matters. We can pay great attention to following the guidelines and miss the most important things. If the rules of PEP 8, or any guideline, become our criteria, then we limit our judgment — our perception — to the issues covered by the guideline. As an example, Hettinger PEP 8ifies a chunk of Python code, making substantial improvements to its readability. But not one person in the audience notices the real issue: this is not really good Python code, it’s a Java approach copied into Python. Once that issue is seen, it’s addressed… and the code quality and readability are transformed, to the point where even I can understand it (and can also understand the appeal of Python, because it uses Python’s unique features).

Being Pythonic

Behind all this is a simple principle. Good Python code is Good. Python. Code. It isn’t good Java code or good C code. It has to be judged against the standard of what good Python looks like. This is perhaps the key to quality Python and quality everything. Quality is not adherence to a long list of guidelines. It comes from having a clear idea of what good is, and bringing your work as close to that idea/ideal as you can.

Finishing

One other thing I learned from the video: how to end a talk if you overrun. Raymond handles the MC with such grace, and you’ll have to watch the video to the end to learn from that.