
Tech Guides

852 Articles

What we learned from Oracle OpenWorld 2017

Amey Varangaonkar
06 Oct 2017
5 min read
“Amazon’s lead is over.” These famous words by Oracle CTO Larry Ellison at Oracle OpenWorld 2016 garnered a lot of attention, as Oracle promised their customers an extensive suite of cloud offerings and offered a closer look at their second-generation IaaS data centers. In the recently concluded OpenWorld 2017, Oracle continued on their quest to take on AWS and other major cloud vendors by unveiling a host of cloud-based products and services. Not just that, they have juiced these offerings up with Artificial Intelligence-based features, in line with all the buzz surrounding AI.

Key highlights from the Oracle OpenWorld 2017

Autonomous Database

Oracle announced a totally automated, self-driving database that would require no human intervention for managing or fine-tuning the database. Using machine learning and AI to eliminate human error, the new database guarantees 99.995% availability. Taking another shot at AWS, Ellison promised in his keynote that customers moving from Amazon's Redshift to Oracle's database can expect a 50% cost reduction. Likely to be named Oracle 18c, this new database is expected to ship across the world by December 2017.

Oracle Blockchain Cloud Service

Oracle joined IBM in the race to dominate the Blockchain space by unveiling its new cloud-based Blockchain service. Built on top of the Hyperledger Fabric project, the service promises to transform the way business is done by offering secure, transparent and efficient transactions. Other enterprise-critical features such as provisioning, monitoring, backup and recovery are also among the standard features this service will offer to its customers. “There are not a lot of production-ready capabilities around Blockchain for the enterprise. There [hasn't been] a fully end-to-end, distributed and secure blockchain as a service,” said Amit Zavery, Senior VP at Oracle Cloud. It is also worth remembering that Oracle joined the Hyperledger consortium just two months ago, and the signs that they would release their own service were already there.

Improvements to Business Management Services

The new features and enhancements introduced for the business management services were among the key highlights of OpenWorld 2017. These features now empower businesses to manage their customers better, and to plan for the future with better organization of resources. Some important announcements in this area were: adding AI capabilities to its cloud services - the Oracle Adaptive Intelligent Apps will now make use of AI capabilities to improve services for any kind of business; developers can now create their own AI-powered Oracle applications, making use of deep learning; Oracle introduced AI-powered chatbots for better customer and employee engagement; and new features such as an enhanced user experience in the Oracle ERP cloud and improved recruiting in the HR cloud services were introduced.

Key Takeaways from Oracle OpenWorld 2017

With these announcements, Oracle have given a clear signal that they're to be taken seriously. They're already buoyed by a strong Q1 result which saw their revenue from cloud platforms hit $1.5 billion, a growth of 51% compared to Q1 2016. Here are some key takeaways from OpenWorld 2017, underlined by the aforementioned announcements. Oracle undoubtedly see cloud as the future, and have placed a lot of focus on the performance of their cloud platform. They're betting on the fact that their familiarity with the traditional enterprise workload will help them win a lot more customers - something Amazon cannot claim. Oracle are riding the AI wave and are trying to make their products as autonomous as possible - to reduce human intervention and, to some extent, human error. With enterprises looking to cut costs wherever possible, this could be a smart move to attract more customers. The autonomous database will require Oracle to automatically fine-tune, patch, and upgrade its database without causing any downtime. It will be interesting to see if the database can live up to its promise of ‘99.995% availability'. Is the role of Oracle DBAs at risk due to the automation? While it is doubtful that they will be out of jobs, there is bound to be a significant shift in their day-to-day operations. It is speculated that DBAs will need to spend less time on traditional administration tasks such as fine-tuning, patching, and upgrading, and instead focus on efficient database design, setting data policies, and securing the data. Cybersecurity was a key theme in Ellison's keynote and at OpenWorld 2017 in general. As enterprise Blockchain adoption grows, so does the need for a secure, efficient digital transaction system. Oracle seem to have identified this opportunity, and it will be interesting to see how they compete with the likes of IBM and SAP to gain major market share. Oracle's CEO Mark Hurd has predicted that Oracle can win the cloud wars, overcoming the likes of Amazon, Microsoft and Google. Judging by the announcements at OpenWorld 2017, it seems like they may have a plan in place to actually pull it off. You can watch highlights from the Oracle OpenWorld 2017 on demand here. Don't forget to check out our highly popular book Oracle Business Intelligence Enterprise Edition 12c, your one-stop guide to building an effective Oracle BI 12c system.


Ethereum Programming Update

Packt Publishing
17 Sep 2017
1 min read
18th September 2017

A book entitled “Ethereum Programming”, purporting to be by Alex Leverington, was erroneously advertised for pre-order on several websites (including Amazon and our own website). This was our mistake. The book does not exist, and we take full responsibility for our mistake in suggesting that the book does exist and would be launched on 4 August 2017. We sincerely apologise for any disappointment and inconvenience caused by this unintentional oversight. Followers of Alex Leverington are directed to https://nessence.net/about/, where they can find information about his work, including any forthcoming titles.

Sincerely,
Packt Publishing Ltd


Reducing Cost in Big Data using Statistics and In-memory Technology - Part 2

Praveen Rachabattuni
06 Jul 2015
6 min read
In the first part of this two-part blog series, we learned that using statistical algorithms gives us a 95 percent accuracy rate for big data analytics, is faster, and is a lot more beneficial than waiting for the exact results. We also took a look at a few algorithms, along with a quick introduction to Spark. Now let's take a look at two tools in depth that are used with statistical algorithms: Apache Spark and Apache Pig.

Apache Spark

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, and Python, as well as an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. At its core, Spark provides a general programming model that enables developers to write applications by composing arbitrary operators, such as mappers, reducers, joins, group-bys, and filters. This composition makes it easy to express a wide array of computations, including iterative machine learning, streaming, complex queries, and batch processing. In addition, Spark keeps track of the data that each of the operators produces, and enables applications to reliably store this data in memory. This is the key to Spark's performance, as it allows applications to avoid costly disk accesses.

It would be wonderful to have one tool for everyone, and one architecture and language for investigative as well as operational analytics. Spark's ease of use comes from its general programming model, which does not constrain users to structure their applications into a bunch of map and reduce operations. Spark's parallel programs look very much like sequential programs, which makes them easier to develop and reason about. Finally, Spark allows users to easily combine batch, interactive, and streaming jobs in the same application. As a result, a Spark job can be up to 100 times faster and requires writing 2 to 10 times less code than an equivalent Hadoop job.

Spark allows users and applications to explicitly cache a dataset by calling the cache() operation. This means that your applications can now access data from RAM instead of disk, which can dramatically improve the performance of iterative algorithms that access the same dataset repeatedly. This use case covers an important class of applications, as all machine learning and graph algorithms are iterative in nature. When constructing a complex pipeline of MapReduce jobs, the task of correctly parallelizing the sequence of jobs is left to you. Thus, a scheduler tool such as Apache Oozie is often required to carefully construct this sequence. With Spark, a whole series of individual tasks is expressed as a single program flow that is lazily evaluated, so that the system has a complete picture of the execution graph. This approach allows the core scheduler to correctly map the dependencies across different stages in the application, and automatically parallelize the flow of operators without user intervention. With a low-latency data analysis system at your disposal, it's natural to extend the engine towards processing live data streams. Spark has an API for working with streams, providing exactly-once semantics and full recovery of stateful operators. It also has the distinct advantage of giving you the same Spark APIs to process your streams, including reuse of your regular Spark application code.
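To make the cache() call described above concrete, here is a minimal PySpark sketch (an illustration only, not taken from the original article; the file name and filter conditions are made up for the example):

```python
from pyspark import SparkContext

# Assumed setup: a local Spark context and a hypothetical log file; both are placeholders.
sc = SparkContext("local[*]", "cache-example")

logs = sc.textFile("user_logs.txt")
errors = logs.filter(lambda line: "ERROR" in line)

# cache() keeps the filtered RDD in memory after the first action computes it,
# so later passes over the same data avoid re-reading and re-filtering from disk.
errors.cache()

print("total errors:", errors.count())                                  # first action: reads, filters, caches
print("login errors:", errors.filter(lambda l: "login" in l).count())   # served from the cached RDD

sc.stop()
```

The second count reuses the in-memory copy, which is exactly the pattern that makes iterative machine learning and graph algorithms fast on Spark.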
Pig on Spark

Pig on Spark combines the power and simplicity of Apache Pig with Apache Spark, making existing ETL pipelines 100 times faster than before. We do that via a unique mix of our operator toolkit, called DataDoctor, and Spark. The primary goals for the project are to make data processing more powerful, to make data processing simpler, and to make data processing 100 times faster than before. DataDoctor is a high-level operator DSL on top of Spark. It has frameworks for non-symmetrical joins, sorting, grouping, and embedding native Spark functions. It hides a lot of complexity and makes it simple to implement data operators used in applications like Pig and Apache Hive on Spark.

Pig operates in a similar manner to big data applications like Hive and Cascading. It has a query language quite akin to SQL that allows analysts and developers to design and write data flows. The query language is translated into a “logical plan” that is further translated into a “physical plan” containing operators. Those operators are then run on the designated execution engine (MapReduce, Apache Tez, and now Spark). There are a whole bunch of details around tracking progress, handling errors, and so on that I will skip here. Query planning on Spark will vary significantly from MapReduce, as Spark handles data wrangling in a much more optimized way. Further query planning can benefit greatly from ongoing effort on Catalyst inside Spark. At this moment, we have simply introduced a SparkPlanner that will undertake the conversion from a logical to a physical plan for Pig. Databricks is working actively to enable Catalyst to handle much of the operator optimization that will plug into SparkPlanner in the near future. Longer term, we plan to rely on Spark itself for logical plan generation. An early version of this integration has been prototyped in partnership with Databricks.

Pig Core hands off Spark execution to SparkLauncher with the physical plan. SparkLauncher creates a SparkContext, providing all the Pig dependency JAR files and Pig itself. SparkLauncher gets an MR plan object created from the physical plan. At this point, we override all the Pig operators with DataDoctor operators recursively in the whole plan. Two iterations are performed over the plan: one that looks at the store operations and recursively travels down the execution tree, and a second iteration that does a breadth-first traversal over the plan and calls convert on each of the operators. The base class of converters in DataDoctor is a POConverter class, which defines the abstract method convert that is called during plan execution. More details of Pig on Spark can be found at PIG-4059. As we merge with Apache Pig, we need to focus on the following enhancements to further improve the speed of Pig: a Cache operator, adding a new operator to explicitly tell Spark to cache certain datasets for faster execution; storage hints, allowing the user to specify the storage location of datasets in Spark for better control of memory; and YARN and Mesos support, adding resource manager support for more global deployment and support.

Conclusion

In many large-scale data applications, statistical perspectives provide us with fruitful analytics in many ways, including speed and efficiency.

About the author

Praveen Rachabattuni is a tech lead at Sigmoid Analytics, a company that provides a real-time streaming and ETL framework on Apache Spark. Praveen is also a committer to Apache Pig.


What's the difference between a data scientist and a data analyst

Erik Kappelman
10 Oct 2017
5 min read
It sounds like a fairly pedantic question to ask what the difference between a data scientist and a data analyst is. But it isn't - in fact, it's a great question that illustrates the way data-related roles have evolved in businesses today. It's pretty easy to confuse the two job roles - there's certainly a lot of misunderstanding about the difference between a data scientist and a data analyst, even within a managerial environment.

Comparing data analysts and data scientists

Data analysts are going to be dealing with data that you might remember from your statistics classes. This data might come from survey results, lab experiments of various sorts, longitudinal studies, or another form of social observation. Data may also come from observation of natural or created phenomena, but the data's form would still be similar. Data scientists, on the other hand, are going to be looking at things like metadata from billions of phone calls, data used to forecast Bitcoin prices that has been scraped from various places around the Internet, or maybe data related to Internet searches before and after some important event. So their data is often different, but is that all? The tools and skill set required for each are actually quite different as well. Data science is much more entwined with the field of computer science than data analysis. A good data analyst should have working knowledge of how computers, networks, and the Internet function, but they don't need to be an expert in any of these things. Data analysts really just need to know a good scripting language that is used to handle data, like Python or R, and maybe a more mathematically advanced tool like MATLAB or Mathematica for more advanced modeling procedures. A data analyst could have a fruitful career knowing only about that much in the realm of technology. Data scientists, however, need to know a lot about how networks and the Internet work. Most data scientists will need to have mastered HTTP, HTML, XML and SQL, as well as scripting languages like Ruby or Python, and also object-oriented languages like Java or C. This is because data scientists spend a lot more time capturing, manipulating, storing and moving around data than a data analyst would. These tasks require a different skill set.

Data analysts and data scientists have different forms of conceptual understanding

There will also likely be a difference in the conceptual understanding of a data analyst versus a data scientist. If you were to ask both a data scientist and a data analyst to derive and twice differentiate the log likelihood function of the binomial logistic regression model, it is more likely the data analyst would be able to do it. I would expect data analysts to have a better theoretical understanding of statistics than a data scientist. This is because data scientists don't really need much theoretical understanding in order to be effective. A data scientist would be better served by learning more about capturing data and analyzing streams of data than theoretical statistics. Differences are not limited to knowledge or skill set; how data scientists and data analysts approach their work is also different. Data analysts generally know what they are looking for as they begin their analysis. By this I mean, a data analyst may be given the results of a study of a new drug, and the researcher may ask the analyst to explore and hopefully quantify the impact of the new drug. A data analyst would have no problem performing this task.
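For the curious, the exercise mentioned above can be written out. This is a standard derivation added here only as an illustration (it is not part of the original article): with responses y_i in {0,1}, predictors x_i, coefficients beta, and p_i the predicted probability, the log likelihood and its first two derivatives are:

```latex
% Binomial logistic regression with p_i = \sigma(x_i^\top \beta) and y_i \in \{0,1\}:
\ell(\beta) = \sum_{i=1}^{n} \Big[ y_i \, x_i^\top \beta - \log\!\big(1 + e^{x_i^\top \beta}\big) \Big]

% First derivative (the score vector):
\frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{n} \big( y_i - p_i \big) \, x_i

% Second derivative (the Hessian), which is negative semi-definite:
\frac{\partial^2 \ell}{\partial \beta \, \partial \beta^\top} = - \sum_{i=1}^{n} p_i (1 - p_i) \, x_i x_i^\top
```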
A data scientist, on the other hand, could be given the task of analyzing locations of phone calls and finding any patterns that might exist. For the data scientist, the goal is often less defined than it is for a data analyst. In fact, I think this is the crux of the entire difference. Data scientists perform far more exploratory data analysis than their data analyst cousins. This difference in approach really explains the difference in skill sets. Data scientists have skill sets that are primarily geared toward extracting, storing and finding uses for data. Data analysts primarily analyze data, and their skill set reflects this. Just to add one more little wrinkle: while calling a data scientist a data analyst is basically correct, calling a data analyst a data scientist is probably not correct. This is because the data scientist is going to have a handle on more of the skills required of a data analyst than a data analyst would of a data scientist. This is another reason there is so much confusion around this subject.

Clearing up the difference between a data scientist and data analyst

So now, hopefully, you can tell the difference between a data scientist and a data analyst. I don't believe either field is superior to the other. If you are choosing which field you would like to pursue, what's important is that you choose the field that best complements your skill set. Luckily it's hard to go wrong, because both data scientists and analysts usually have interesting and rewarding careers.


How to master Continuous Integration: Tools and Strategies

Erik Kappelman
22 Nov 2016
4 min read
As time moves forward, the development of software is becoming more and more geographically and temporally fragmented. This is due to the globalized world in which we now live, and the emergence of many tools and techniques allowing many types of work, including software development, to be decentralized. As with any powerful tool, software development tools that facilitate the sharing of code by multiple developers simultaneously need to be managed appropriately, or the results can be quite negative. Continuous integration is one strategy for managing large software codebases through the development process.

Continuous integration is really quite simple on the surface. In order to reduce the extra work that accompanies a branch and the mainline of code becoming non-integrable, continuous integration advocates for, well, continuous integration. The basic tenets of continuous integration are fairly well known, but they are worth explicitly mentioning. Projects need to have a code repository; if this isn't a given in any of your development processes, it needs to be. Automating the build of projects can also increase the efficacy of continuous integration. Part of an automated build should also include self-testing in a production-level environment. Testing should be performed using the most up-to-date build of a project to ensure the tests are being performed on the correct codebase. All developers need to commit their changes at the absolute least once every working day. These could be considered the basic requirements for a development process that is continuously integrated.

While this blog could focus on one specific continuous integration tool, I think an overview of a few tools to get someone started in the right direction is better. This Wikipedia page compares some of the different continuous integration software available, under both proprietary and open licenses. A really great starting tool to learn would be Buildbot. There are a few reasons to like this tool. First of all, it's very much open source and completely free. Also, it uses Python as its configuration and control language. Python is an all-around good language, and lends itself very well to the configuration of other software. Buildbot is a full-fledged continuous integration tool, supporting automated builds and testing and a variety of change notification tools. This all ships as a Python package, meaning its installation and use does not tax resources very much. The tutorial on the Buildbot website will get you up and running, and their manual is incredibly detailed. Buildbot is an excellent starting point for someone who is attempting to bring continuous integration into their development process, or someone interested in expanding their skill set. A minimal configuration sketch is shown after this article.

Not everyone is a lover of Python, but many people are lovers of Node.js. For the Node.js aficionado, another open-source continuous integration solution is NCI. NCI, short for Node.js Continuous Integration, is a tool written in and used by Node.js. The JavaScript base for Node is a powerful attraction for many people, especially beginners, who have the most experience coding with JavaScript. Using NCI does introduce the requirement of having Node.js, which can be onerous, but Node is worth installing if you don't have it already. If you use Node already, NCI can be installed using npm, because it is a Node package. A basic start-up tutorial for NCI is located here. The documentation is not as clear, or as large, as that of Buildbot. This is in part because NCI is part of the Node ecosystem, so many of its plugins and dependencies have separate documentation. NCI is also a bit less powerful than Buildbot when it ships. One of the benefits of NCI is its modularity. Servers can be large and complex or small and simple; it just depends on what the user wants.

To end on somewhat of a side note, some continuous integration tools may simply be too powerful and complex for a given developer's needs. I myself work with one other developer on certain projects. These projects tend to be small, and the utility of a full-fledged continuous integration solution is really less than the cost or hassle. One example of powerful collaboration software that has many continuous integration elements is GitLab. GitLab could definitely perform effectively as a full-fledged continuous integration solution, but the community version of GitLab would be better suited as simply a collaboration tool for smaller projects.

Author: Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.
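As a companion to the Buildbot discussion above, here is a minimal master.cfg sketch. It is an illustration only, not taken from the Buildbot tutorial; it assumes a recent Buildbot release exposing the buildbot.plugins API, and the repository URL, worker name, password, and test command are all placeholders:

```python
# master.cfg -- minimal Buildbot configuration sketch (all names and URLs are placeholders)
from buildbot.plugins import changes, schedulers, steps, util, worker

c = BuildmasterConfig = {}

# One build worker; the name and password must match the worker's own configuration.
c['workers'] = [worker.Worker("example-worker", "example-pass")]
c['protocols'] = {'pb': {'port': 9989}}

# Poll a Git repository for new commits every five minutes.
c['change_source'] = [changes.GitPoller(
    'https://example.com/myproject.git', branches=['master'], pollInterval=300)]

# Build factory: check out the code, then run the test suite.
factory = util.BuildFactory()
factory.addStep(steps.Git(repourl='https://example.com/myproject.git', mode='incremental'))
factory.addStep(steps.ShellCommand(command=["python", "-m", "pytest"]))

c['builders'] = [util.BuilderConfig(
    name="run-tests", workernames=["example-worker"], factory=factory)]

# Trigger the builder whenever a commit lands on master.
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name="on-commit",
    change_filter=util.ChangeFilter(branch='master'),
    treeStableTimer=60,
    builderNames=["run-tests"])]
```

The pieces map directly onto the tenets listed above: a change source watching the repository, an automated build, and a self-testing step that runs on every commit.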

Reducing Cost in Big Data using Statistics and In-memory Technology - Part 1

Praveen Rachabattuni
03 Jul 2015
4 min read
The world is shifting from private, dedicated data centers to on-demand computing in the cloud. This shift moves the onus of cost from the hands of IT companies to the hands of developers. As your data sizes start to rise, the computing cost grows linearly with them. We have found that using statistical algorithms gives us a 95 percent accuracy rate, is faster, and is a lot more beneficial than waiting for the exact results. The following are some common analytical queries that we have often come across in applications: How many distinct elements are in the data set (that is, what is the cardinality of the data set)? What are the most frequent elements (that is, the “heavy hitters” and “top elements”)? What are the frequencies of the most frequent elements? Does the data set contain a particular element (search query)? Can you filter data based upon a category?

Statistical algorithms for quicker analytics

Frequently, statistical algorithms avoid storing the original data, replacing it with hashes, which eliminates a lot of network traffic. Let's get into the details of some of these algorithms that can help answer queries similar to those mentioned previously. A Bloom filter is a data structure designed to tell you, rapidly and memory-efficiently, whether an element is present in a set. It is suitable in cases when we need to quickly filter items that are present in a set. HyperLogLog is an approximate technique for computing the number of distinct entries in a set (its cardinality). It does this while using only a small amount of memory. For instance, to achieve 99 percent accuracy, it needs only 16 KB. In cases when we need to count distinct elements in a dataset spread across a Hadoop cluster, we could compute the hashes on different machines, build the bit index, and combine the bit indexes to compute the overall distinct elements. This eliminates the need to move the data across the network and thus saves us a lot of time. The Count-min sketch is a probabilistic sub-linear space streaming algorithm that can be used to summarize a data stream and obtain the frequency of elements. It allocates a fixed amount of space to store count information, which does not vary over time even as more and more counts are updated. Nevertheless, it is able to provide useful estimated counts, because the accuracy scales with the total sum of all the counts stored.

Spark - a faster execution engine

Spark is a faster execution engine that provides 10 times the performance over MapReduce when combined with these statistical algorithms. Using Spark with statistical algorithms gives us a huge benefit both in terms of cost and time savings. Spark gets most of its speed by constructing Directed Acyclic Graphs (DAGs) out of the job operations and by using memory to save intermediate data, thus making the reads faster. When using statistical algorithms, saving the hashes in memory makes the algorithms work much faster.

Case study

Let's say we have a continuous stream of user log data coming in every hour at a rate of 4.4 GB per hour, and we need to analyze the distinct IPs in the logs on a daily basis. At my old company, when MapReduce was used to process the data, it was taking about 6 hours to process one day's worth of data at a size of 106 GB. We had an AWS cluster consisting of 50 spot instances and 4 on-demand instances running to perform the analysis, at a cost of $150 per day. Our system was then shifted to use Spark and HyperLogLog. This shift brought down the cost to $16.50 per day.
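As an illustration of the kind of job described in the case study, here is a short sketch (not from the original article, and not the author's actual pipeline). It uses Spark's built-in approx_count_distinct, which is backed by a HyperLogLog++ sketch and is available in the DataFrame API of Spark 2.1 and later; the file path and column name are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumed setup: a local session and a hypothetical CSV of access logs with an "ip" column.
spark = SparkSession.builder.appName("distinct-ip-estimate").getOrCreate()
logs = spark.read.csv("access_logs.csv", header=True)

# approx_count_distinct uses a HyperLogLog++ sketch under the hood; rsd is the target
# relative standard deviation (a smaller rsd means more memory but better accuracy).
estimate = logs.agg(F.approx_count_distinct("ip", rsd=0.01).alias("distinct_ips"))
estimate.show()

spark.stop()
```

Because only the small sketch is merged across executors, the full set of IPs never has to be shuffled over the network, which is the saving the case study describes.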
To summarize, we had a 3.1 TB stream of data processed every month at a cost of $495, which would have cost about $4,500 on the original system using MapReduce without the statistical algorithm in place.

Further reading

In the second part of this two-part blog series, we will discuss two tools in depth: Apache Spark and Apache Pig. We will take a look at how Pig combined with Spark makes existing ETL pipelines 100 times faster, and we will further our understanding of how statistical perspectives positively affect data analytics.

About the author

Praveen Rachabattuni is a tech lead at Sigmoid Analytics, a company that provides a real-time streaming and ETL framework on Apache Spark. Praveen is also a committer to Apache Pig.


4 Gaming innovations that are impacting all of tech

Raka Mahesa
26 Apr 2017
5 min read
Video games are a medium that sits at the intersection of entertainment, art, and technology. Considering that video games are a huge industry with over $90 billion in yearly revenues, and how the various fields of technology are connected to each other, it makes sense that video games also have an impact on other industries, doesn't it? So let's talk about how gaming has expanded beyond its own industry.

Innovation in hardware

For starters, video games are a big driver in the computer hardware industry. People who mostly use their computer for working with documents, or for browsing the Internet, don't really need high-end hardware. A decent processor, an okay amount of RAM, and just a few hundred gigabytes of storage is all they need to have their computers working for them. On the other hand, people who use their computer to play games need high-end hardware to play the latest games. These gamers want to play games in the best possible setting, so they demand a GPU that can render their games quickly. This leads to a tight competition between graphics card companies, who try their best to produce the most capable GPU at the lowest price possible. And it's not just the GPU. Unlike movies with their 24 frames per second, games can have a higher number of frames per second. Because games with a high number of FPS have better animation, hardware makers have started to produce computer monitors with higher refresh rates that can show more frames per second. They've also produced auxiliary hardware (i.e. keyboards, etc.) that is more sensitive to user input, because competitive gamers really appreciate all the extra precision they can get from their hardware. In short, video games have spurred various innovations in computer hardware technology, simply because those innovations provide users with a better gaming experience. One of the interesting parts of this is how the progress looks like a loop. When a game developer produces a video game that requires the most advanced hardware, hardware manufacturers then create better hardware that can render the game more efficiently. Then, game developers notice this additional capability and make sure their next game uses this extra resource, and so on. This endless cycle is the fuel that keeps computer hardware progressing.

Innovation in AI research and technology

Another interesting aspect is how the pursuit of a better GPU has benefitted the research of artificial intelligence. Unlike your usual application, artificial intelligence usually runs its processes in parallel instead of sequentially. A modern-day CPU, unfortunately, isn't really constructed to run hundreds of processes at the same time. On the other hand, GPUs are designed to process multiple pixels at the same time, which makes them the perfect hardware to run artificial intelligence. So, thanks to the progress in GPU technology, you don't need a special workstation to run your artificial intelligence project anymore. You just need to hook an off-the-shelf GPU to your PC and your artificial intelligence is ready to run, making AI research accessible to anyone with a computer. And because video games are a big factor in the progress of graphics hardware, we can say that video games have made an indirect impact on the accessibility of AI technology.

Innovation in virtual reality and augmented reality

Another field that video games have made an impact on is virtual and augmented reality. One of the reasons that virtual reality and augmented reality are making a comeback in recent years is that consumer graphics hardware is now powerful enough to run VR apps. As you may know, VR apps require hardware that's more powerful than your usual mainstream computer. Fortunately, gaming computers nowadays are powerful enough to run those VR apps without causing motion sickness. Even Facebook, which isn't really a gaming company, focuses their VR effort on video games, because right now the only computer that can run VR properly is a gaming computer. And it's not just VR and AR. These days, when a new platform is launched, its ability to play video games usually becomes one of its selling points. When Apple TV was launched, its capability to play games was highlighted. Microsoft also had a big showcase using HoloLens and Minecraft to demonstrate how the device would work. Video games have become one of the default ways for companies to demonstrate the capabilities of their devices, and to attract more developers to their platform.

Innovation beyond technology

The impact of video games isn't limited to technological fields. Many have found video games to be an effective teaching and therapeutic tool. For example, soldiers in the army are encouraged to play military shooter games during their off-duty time, so they can stay in a soldier mindset even when they're not on duty. As for therapy, many studies have found that video games can be a great aid in treating patients with traumatic disorders, as well as improving autistic patients' social skills. These fields are just a sample of those that have benefited, and innovated, from the gaming industry. There are still many other fields in which games have made an impact, including serious games, gamification, simulation, and more.

About the author

Raka Mahesa is a game developer at Chocoarts: http://chocoarts.com/, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


The biggest web developer salary and skills survey of 2015

Packt Publishing
27 Jul 2015
1 min read
The following infographic is taken from our comprehensive Skill Up IT industry salary reports, with data from over 20,000 developers. Download the full size infographic here.    


The Atomic Game Engine: How to Become a Contributor

RaheelHassim
05 Dec 2016
6 min read
What is the Atomic Game Engine?

The Atomic Game Engine is a powerful multiplatform game development tool that can be used for both 2D and 3D content. It is layered on top of Urho3D, an open source game development tool, and also makes use of an extensive list of third-party libraries including Duktape, Node.js, Poco, libcurl, and many others.

What makes it great?

It supports many platforms such as Windows, OSX, Linux, Android, iOS and WebGL. It also has a flexible scripting approach, and users can choose to code in C#, JavaScript, TypeScript or C++. There is an extensive library of example games available to all users, which shows off different aspects and qualities of the engine. (Image taken from: http://atomicgameengine.com/blog/announcement-2/)

What makes it even greater for developers?

Atomic has recently announced that it is now under the permissive MIT license. Errr, great… What exactly does that mean? This means that Atomic is now completely open source and anyone can use it, modify it, publish it, and even sell it, as long as the copyright notice remains on all substantial portions of the software. Basically, just don't remove the text in the picture below from any of the scripts and it should be fine. Here's what the MIT license in the Atomic Game Engine looks like: (Atomic Game Engine MIT License)

Why should I spend time and effort contributing to the Atomic Game Engine?

The non-restrictive MIT license makes it easy for developers to freely contribute to the engine and get creative without the fear of breaking any laws. The Atomic Game Engine acknowledges all of their contributors by publishing their names to the list of developers working on the engine, and contributors have access to a very active community where almost all questions are answered and developers are supported. As a junior software developer, I feel I've gained invaluable experience by contributing to open source software, and it's also a really nice addition to my portfolio. There is a list of issues available on the GitHub page where the issues have a difficulty level, priority, and issue type labeled.

This is wonderful! How do I get started?

Contributors can download the MIT open source code here: https://github.com/AtomicGameEngine/AtomicGameEngine

*Disclaimer: This tutorial is based on using the Windows platform, SmartGit, and Visual Studio Community Version 2015.
**Another Disclaimer: I wrote this tutorial with someone like myself in mind, i.e. amazingly average in many ways, but also relatively new to the industry and a first-time contributor to open source software.

Step 1: Install Visual Studio Community 2015 here. (Visual Studio download page)

Step 2: Install CMake, making sure cmake is on your path. (CMake install options)

Step 3: Fork the Atomic Game Engine's repository to create your own version of it.
a) Go to the AtomicGameEngine GitHub page and click on the Fork button. This will allow you to experiment and make changes to your own copy of the engine without affecting the original version. (Fork the repository)
b) Navigate to your GitHub profile and click on your forked version of the engine. (GitHub profile page with repositories)

Step 4: Clone the repository and include all of the submodules.
a) Click the green Clone or download button on the right and copy the web URL of your repository. (Your AGE GitHub page)
b) Open up SmartGit (or any other Git client) to clone the repository onto your machine. (Clone repository in SmartGit)
c) Paste the URL you copied earlier into the Repository URL field. (Copy remote URL)
d) Include all Submodules and Fetch all Heads and Tags. (Include all submodules)
e) Select a local directory to save the engine. (Add a local directory to save the engine on your machine)
h) Your engine should start cloning...

We've set everything up for our local repository. Next, we'd like to sync the original AtomicGameEngine with our local version of the engine so that we can always stay up to date with any changes made to the original engine.

Step 5: Create an upstream branch.
a) Click Remote → Add:
   i) Add the AtomicGameEngine remote URL
   ii) Name it upstream.
(Adding an upstream to the original engine)

We are ready to start building a Visual Studio solution of the engine.

Step 6: Run the CMake_VS2015.bat batch file in the AtomicGameEngine directory. This will generate a new folder in the root directory, which will contain the Atomic.sln for Visual Studio. (AGE directory)

At this point, we can make some changes to the engine (click here for a list of issues). Create a feature branch off the master for pull requests. Remember to stick to the code conventions already being used. Once you're happy with the changes you've made to the engine:
- Update your branch by merging in upstream. Resolve all conflicts and test it again.
- Commit your changes and push them up to your branch.
It's now time to send a pull request.

Step 7: Send a pull request.
a) Go to your fork of the AtomicGameEngine repository on GitHub. Select the branch you want to send through, and click New Pull Request.
b) Always remember to reference the issue number in your message to make it easier for the creators to manage the issues list. (Personal version of the AGE)

Your pull request will get reviewed by the creators and, if the content is acceptable, it will get landed into the engine and you'll become an official contributor to the Atomic Game Engine!

Resources for the blog:
[1] The Atomic Game Engine website
[2] Building the Atomic Editor from Source
[3] GitHub Help: Fork a Repo
[4] What I mean when I use the MIT license

About the author: RaheelHassim is a software developer who recently graduated from Wits University in Johannesburg, South Africa. She was awarded the IGDA Women in Games Ambassadors scholarship in 2016 and attended the Game Developers Conference. Her games career started at Luma Interactive, where she became a contributor to the Atomic Game Engine. In her free time she binge-watches Friends and plays music covers on her guitar.

Things to remember before building your first game

Raka Mahesa
07 Aug 2017
6 min read
I was 11 years old when I decided I wanted to make games. And no, it wasn't because there was a wonderful game that inspired me to make games; the reason was something more childish. You see, the Sony PlayStation 2 was released just a year earlier and a lot of games were being made for the new console. The 11-year-old me, who only had a PlayStation 1, got annoyed because games weren't being developed for the PS1 anymore. So, out of nowhere, I decided to just make those games myself. Two years after that, I finally built my first game, and after a couple more years, I actually started developing games for a living. It was childish thinking, but it gave me a goal very early in life and helped me choose a path to follow. And while I think my life turned out quite okay, there are still things that I wish the younger me had known back then. Things that would have helped me a lot back when I was building my first game. And even though I can't go back and tell those things to myself, I can tell them to you. And hopefully, they will help you in your quest to build your first game.

Why do you want to build a game?

Let's start with the most important thing you need to understand when you want to build a game: yourself. Why do you want to build a game? What goal do you hope to achieve by developing a game? There are multiple possible reasons for someone to start creating a game. You could develop a game because you want to learn about a particular programming language or library, or because you want to make a living from selling your own game, or maybe because you have free time and think that building a game is a cool way to pass it. Whatever it is, it's important for you to know your reasons, because it will help you decide what is actually needed for the game you're building. For example, if you develop a game to learn about programming, then your game doesn't need to use fancy graphics. On the other hand, if you develop a game to commercialize it, then having a visually appealing game is highly important. One more thing to clarify before we go further: there are two phases that people go through before they build their first game. The first one is when someone has the desire to build a game, but has absolutely no idea how to achieve it, and the other is when someone has both the desire and the knowledge needed to build a game, but hasn't started doing it. Both of them have their own set of problems, so I'll try to address both phases here, starting with the first one.

Learn and master the tools

Naturally, one of the first questions that comes to mind when wanting to create games is how to actually start the creation process. Fortunately, the answer to this question is the same for any kind of creative project you'll want to attempt: learn and master the tools. For game development, this means game creation tools, and they come in all shapes and sizes. There are those that don't need any kind of programming, like Twine and RPG Maker, those that require a tiny bit of programming, like Stencyl and GameMaker, and the professional ones that need a lot of coding, like Unity and Unreal Engine. Though, if you want to build games to learn about programming, there's no better way than to just use a programming language with some game-making library, like MonoGame. With so many tools ready for you to use, will your choice of tools matter? If you're building your first game, then no, the tool that you use will not matter that much. Whilst it's true that you'd probably want to use a tool with better capabilities in the future, at this stage, what's important is learning the actual game development process. And this process is actually the same no matter what tool you use.

KISS: Keep It Simple, Sugar

So, now that you know how to build a game, what else do you need to be aware of before you start building it? Well, here's the one thing that most people only realize after they actually start building their game: game development is hard. For every feature that you want to add to a game, there will be dozens of cases that you have to think about to make sure the feature works fine. And that's why one of the most effective mantras when you're starting game development is KISS: Keep It Simple, Sugar (a change may have been made to the original, slightly more insulting acronym). Are you sure you need to add enemies to your game? Does your game actually need a health bar, or would a health counter be enough? If developing a game is hard, then finishing it is even harder. Keeping the game simple increases your chance of finishing it, and you can always build a more complex game in the future. After all, a released game is better than a perfect, but unreleased, game. That said, it's possible that you're building a game that you've been dreaming of since forever, and you'd never settle for less. If you're hell-bent on completing this dream game of yours, who am I to tell you not to pursue it? After all, there are successful games out there that were actually the developer's first game project. If that's how it is, just remember that motivation loss is your biggest enemy and you need to actively combat it. Show your work to other people, or take a break when you've been working on it too much. Just make sure you don't lose your drive to complete your project. It'll be worth it in the end! I hope this advice is helpful for you. Good luck with building your games!

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


Points to consider while prepping your data for your data science project

Amarabha Banerjee
30 Nov 2017
5 min read
Note: In this article by Jen Stirrup & Ruben Oliva Ramos, from their book Advanced Analytics with R and Tableau, we shall look at the steps involved in prepping for any data science project, taking the example of a data classification project using R and Tableau.

Business Understanding

When we are modeling data, it is crucial to keep the original business objectives in mind. These business objectives will direct the subsequent work in the data understanding, preparation and modeling steps, and the final evaluation and selection (after revisiting earlier steps if necessary) of a classification model or models. At later stages, this will help to streamline the project, because we will be able to keep the model's performance in line with the original requirement while retaining a focus on ensuring a return on investment from the project. The main business objective is to identify individuals who are higher earners so that they can be targeted by a marketing campaign. For this purpose, we will investigate the data mining of demographic data in order to create a classification model in R. The model will be able to accurately determine whether individuals earn a salary that is above or below $50K per annum.

Working with Data

In this section, we will use Tableau as a visual data preparation tool in order to prepare the data for further analysis. Here is a summary of some of the things we will explore: looking at columns that do not add any value to the model; columns that have so many missing categorical values that they do not predict the outcome reliably; and reviewing missing values from the columns. The dataset used in this project has 49,000 records. You can see from the files that the data has been divided into a training dataset and a test set. The training dataset contains approximately 32,000 records and the test dataset around 16,000 records. It's helpful to note that there is a column that indicates the salary level, or whether it is greater than or less than fifty thousand dollars per annum. This can be called a binomial label, which basically means that it can hold one of two possible values. When we import the data, we can filter for records where no income is specified. There is one record that has a NULL, and we can exclude it. (Here is the filter.) Let's explore the binomial label in more detail. How many records belong to each label? Let's visualize the finding. Quickly, we can see that 76 percent of the records in the dataset have a class label of <50K. Let's have a browse of the data in Tableau in order to see what the data looks like. From the grid, it's easy to see that there are 14 attributes in total. We can see the characteristics of the data: seven polynomial attributes (workclass, education, marital-status, occupation, relationship, race, native-country); one binomial attribute (sex); and six continuous attributes (age, fnlwgt, education-num, capital-gain, capital-loss, hours-per-week). From the preceding chart, we can see that nearly 2 percent of the records are missing for one country, and the vast majority of individuals are from the United States. This means that we could consider the native-country feature as a candidate for removal from the model creation, because the lack of variation means that it isn't going to add anything interesting to the analysis.

Data Exploration

We can now visualize the data in boxplots, so we can see the range of the data. In the first example, let's look at the age column, visualized as a boxplot in Tableau. We can see that the values are higher for the age characteristic, and there is a different pattern for each income level. When we look at education, we can also see a difference between the two groups. We can focus on age and education, while discarding other attributes that do not add value, such as native-country. The fnlwgt column does not add value because it is specific to the census collection process. When we visualize the race feature, it's noted that the White value appears in 85 percent of overall cases. This means that it is not likely to add much value to the predictor. Now, we can look at the number of years that people spend in education. When the education number attribute is plotted, it can be seen that the lower values tend to predominate in the <50K class and the higher levels of time spent in education are more common in the >50K class. This finding may indicate some predictive capability in the education feature. The visualization suggests that there is a difference between both groups, since the group that earns over $50K per annum does not appear much in the lower education levels. To summarize, we will focus on age and education as providing some predictive capability in determining the income level. The purpose of the model is to classify people by their earning level. Now that we have visualized the data in Tableau, we can use this information in order to model and analyze the data in R to produce the model. If you liked this article, please be sure to check out Advanced Analytics with R and Tableau, which consists of this article and many useful analytics techniques with R and Tableau.


No Patch for Human Stupidity - Three Social Engineering Techniques You Might Not Know About

Sam Wood
12 Aug 2015
5 min read
There's a simple mantra beloved by pentesters and security specialists: "There's no patch for human stupidity!" Whether it's hiding a bunch of Greeks inside a wooden horse to breach the walls of Troy or hiding a worm inside the promise of a sexy picture of Anna Kournikova, the gullibility of our fellow humans has long been one of the most powerful weapons of anyone looking to breach security. In the penetration testing industry, working to exploit that human stupidity and naivety has a name - social engineering. The idea that hacking involves cracking black ICE and de-encrypting the stand-alone protocol by splicing into the mainframe backdoor - all whilst wearing stylish black and pointless goggles - will always hold a special place in our collective imagination. In reality, though, some of the most successful hackers don't just rely on their impressive tech skills, but on their ability to defraud. We're wise to the suspicious and unsolicited phone call from 'Windows Support' telling us that they've detected a problem on our computer and need remote access to fix it. We've cottoned on that Bob Hackerman is not in fact the password inspector who needs to know our login details to make sure they're secure enough. But hackers are getting smarter. Do you think you'd fall for one of these three surprisingly common social engineering techniques?

1. Rogue Access Points - No such thing as free WiFi

You've finally impressed your boss with your great ideas about the future of Wombat Farming. She thinks you've really got a chance to shine - so she's sent you to Wombat International, the biggest convention of Wombat Farmers in the world, to deliver a presentation and drum up some new investors. It's just an hour before you give the biggest speech of your life and you need to check the notes you've got saved in the cloud. Helpfully, though, the convention provides free WiFi! Happily, you connect to WomBatNet. 'In order to use this WiFi, you'll need to download our app,' the portal page tells you. Well, it's annoying - but you really need to check your notes! Pressed for time, you start the download. Plot Twist: The app is malware. You've just infected your company computer. The 'free WiFi' is in fact a wireless hotspot set up by a hacker with less-than-noble intentions. You've just fallen victim to a Rogue Access Point attack.

2. The Honeypot - Seduced by Ice Cream

You love ice cream - who doesn't? So you get very excited when a man wearing a billboard turns up in front of your office handing out free samples of Ben and Jerry's. They're all out of Peanut Butter Cup - but it's okay! You've been given a flyer with a QR code that will let you download a Ben and Jerry's app for the chance to win Free Ice Cream for Life! What a great deal! The minute you're back in the office and linked up to your work WiFi, you start the download. You can almost taste that Peanut Butter Cup. Plot Twist: The app is malware. Like Cold War spies seduced by sexy Russian agents, you've just fallen for the classic honeypot social engineering technique. At least you got a free ice cream out of it, right?

3. Road Apples - Why You Shouldn't Lick Things You Pick Up Off the Street

You spy a USB stick, clearly dropped on the sidewalk. It looks quite new - but you pick it up and pop it in your pocket. Later that day, you settle down to see what's on this thing - maybe you can find out who it belongs to and return it to them; maybe you're just curious for the opportunity to take a sneak peek into a small portion of a stranger's life. You plug the stick into your laptop and open up the first file, called 'Government Secrets'... Plot Twist: It's not really much of a twist by now, is it? That USB stick is crawling with malware - and now it's in your computer. Earlier that day, that pesky band of hackers went on a sowing spree, scattering their cheap flash drives all over the streets near your company, hoping to net themselves a sucker. Once again, you've fallen victim - this time to the Road Apples attack.

What can you do?

The reason people keep using social engineering attacks is simple - they work. As humans, we're inclined to be innately trusting - and certainly there are more free hotspots, ice cream apps, and lost USB sticks that are genuine and innocent than ones that are insidious schemes of hackers. There may be no patch for human stupidity, but that doesn't mean you need to be careless - keep your wits about you and remember the security rules that you shouldn't break, no matter how innocuous the situation seems. And if you're a pentester or security professional? Keep on social engineering and make your life easy - the chink in almost any organisation's armour is going to be its people. Find out more about internet security and what we can learn from attacks on the WEP protocol with this article. For more on modern infosec and penetration testing, check out our Pentesting page.

Deep Dream: Inceptionistic art from neural networks

Janu Verma
04 Jan 2017
9 min read
The following image, known as the dog-slug, was posted on Reddit and was reported to have been generated by a convolutional neural network. There was a lot of speculation about the validity of that claim. It was later confirmed that the image was indeed generated by a neural network, after Google described the mechanism for generating such images, called it deep dream, and released their code for anyone to produce these pictures. This marked the beginning of inceptionistic art created with neural networks.

Deep convolutional neural networks (CNNs) have been very effective in image recognition problems. A deep neural network has an input layer, where the data is fed in, an output layer, which produces the prediction for each data point, and many layers in between; information moves from one layer to the next. CNNs work by progressively extracting higher-level features from the image at successive layers of the network. The initial layers detect edges and corners; these features are then fed into the next layers, which combine them into features that make up the image, e.g. segments of the image that discern the type of object. The final layer builds a classifier from these features, and the output is the most likely category for the image.

Deep dream works by reversing this process. An image is fed to a network that has been trained to recognize the categories of the ImageNet dataset, which contains 1.2 million images across 1,000 categories. Since each layer of the network 'learns' features at a different level, we can choose a layer, and the output of that layer shows how it interprets the input image. The output of this layer is then enhanced to produce an inceptionistic-looking picture; thus a roughly puppy-looking segment of the image becomes super puppy-like.

In this post, we will learn how to create inceptionistic images like deep dream using a pre-trained convolutional neural network called VGG (also known as OxfordNet). This network architecture is named after the Visual Geometry Group at Oxford, who developed it. It was used to win the ILSVRC (ImageNet) competition in 2014. To this day, it is considered an excellent vision model, although it has been somewhat outperformed by more recent advances such as Inception (also known as GoogLeNet), which Google used to produce its deep dream images. We will use a library called Keras for our examples.

Keras

Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-style API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods.

Installation

Keras has the following dependencies:

- numpy
- scipy
- pyyaml
- hdf5 (for saving/loading models)
- theano (for the Theano backend)
- tensorflow (for the TensorFlow backend)

The easiest way to install Keras is from the Python Package Index (PyPI):

sudo pip install keras
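With Keras installed, it is worth seeing the core mechanism in isolation before reading the full example. The sketch below is illustrative only and makes a few assumptions that are not part of the original script: it uses the TensorFlow dimension ordering, picks the single layer block5_conv1 arbitrarily, and uses plain gradient ascent with an arbitrary step size. It defines a loss that rewards strong activations in that layer and repeatedly nudges a preprocessed image in the direction that increases the loss; the full script below does the same thing with several layers, extra regularization terms, and L-BFGS instead of plain gradient ascent.

import numpy as np
from keras.applications import vgg16
from keras import backend as K

model = vgg16.VGG16(weights='imagenet', include_top=False)
layer_output = model.get_layer('block5_conv1').output  # layer chosen for illustration

# the "loss" is the mean activation of the chosen layer; making it larger
# makes the image look more like whatever excites that layer
loss = K.mean(layer_output)
grads = K.gradients(loss, model.input)[0]
grads /= K.sqrt(K.mean(K.square(grads))) + 1e-7  # normalize for stable step sizes
step_fn = K.function([model.input], [loss, grads])

def gradient_ascent(img, steps=20, step_size=1.0):
    # img is a preprocessed batch of shape (1, height, width, 3),
    # e.g. the output of the preprocess_image() helper in the full script
    for _ in range(steps):
        loss_value, grad_values = step_fn([img])
        img += step_size * grad_values
    return img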
Deep dream in Keras

The following script is taken from the official Keras example code on GitHub.

from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
import numpy as np
from scipy.misc import imsave
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse
from keras.applications import vgg16
from keras import backend as K
from keras.layers import Input

parser = argparse.ArgumentParser(description='Deep Dreams with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')
args = parser.parse_args()

base_image_path = args.base_image_path
result_prefix = args.result_prefix

# dimensions of the generated picture
img_width = 800
img_height = 800

# path to the model weights file
weights_path = 'vgg_weights.h5'

# some settings we found interesting
saved_settings = {
    'bad_trip': {'features': {'block4_conv1': 0.05,
                              'block4_conv2': 0.01,
                              'block4_conv3': 0.01},
                 'continuity': 0.01,
                 'dream_l2': 0.8,
                 'jitter': 5},
    'dreamy': {'features': {'block5_conv1': 0.05,
                            'block5_conv2': 0.02},
               'continuity': 0.1,
               'dream_l2': 0.02,
               'jitter': 0},
}
# the settings we will use in this experiment
settings = saved_settings['dreamy']

# util function to open, resize and format pictures into appropriate tensors
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_width, img_height))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img

# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_dim_ordering() == 'th':
        x = x.reshape((3, img_width, img_height))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_width, img_height, 3))
    # remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # BGR -> RGB
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

if K.image_dim_ordering() == 'th':
    img_size = (3, img_width, img_height)
else:
    img_size = (img_width, img_height, 3)
# this will contain our generated image
dream = Input(batch_shape=(1,) + img_size)

# build the VGG16 network with our placeholder
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=dream,
                    weights='imagenet', include_top=False)
print('Model loaded.')

# get the symbolic outputs of each "key" layer (we gave them unique names)
layer_dict = dict([(layer.name, layer) for layer in model.layers])

# continuity loss util function
def continuity_loss(x):
    assert K.ndim(x) == 4
    if K.image_dim_ordering() == 'th':
        a = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, 1:, :img_height - 1])
        b = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, :img_width - 1, 1:])
    else:
        a = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, 1:, :img_height - 1, :])
        b = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, :img_width - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))
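# Note: the following comments are commentary added for this article, not part of
# the original Keras example. The "loss" assembled below is what L-BFGS will
# *minimize*. The per-layer feature terms are subtracted (loss -= ...), so
# minimizing the loss maximizes the squared activations of the chosen layers --
# which is what produces the dream. The continuity and L2 terms are added, acting
# as regularizers that keep the image locally smooth and its pixel values moderate.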
# define the loss
loss = K.variable(0.)
for layer_name in settings['features']:
    # add the L2 norm of the features of a layer to the loss
    assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
    coeff = settings['features'][layer_name]
    x = layer_dict[layer_name].output
    shape = layer_dict[layer_name].output_shape
    # we avoid border artifacts by only involving non-border pixels in the loss
    if K.image_dim_ordering() == 'th':
        loss -= coeff * K.sum(K.square(x[:, :, 2: shape[2] - 2, 2: shape[3] - 2])) / np.prod(shape[1:])
    else:
        loss -= coeff * K.sum(K.square(x[:, 2: shape[1] - 2, 2: shape[2] - 2, :])) / np.prod(shape[1:])

# add continuity loss (gives image local coherence, can result in an artful blur)
loss += settings['continuity'] * continuity_loss(dream) / np.prod(img_size)
# add image L2 norm to loss (prevents pixels from taking very high values, makes image darker)
loss += settings['dream_l2'] * K.sum(K.square(dream)) / np.prod(img_size)

# feel free to further modify the loss as you see fit, to achieve new effects...

# compute the gradients of the dream wrt the loss
grads = K.gradients(loss, dream)

outputs = [loss]
if type(grads) in {list, tuple}:
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([dream], outputs)

def eval_loss_and_grads(x):
    x = x.reshape((1,) + img_size)
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

# this Evaluator class makes it possible
# to compute loss and gradients in one pass
# while retrieving them via two separate functions,
# "loss" and "grads". This is done because scipy.optimize
# requires separate functions for loss and gradients,
# but computing them separately would be inefficient.
class Evaluator(object):

    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the loss
x = preprocess_image(base_image_path)
for i in range(15):
    print('Start of iteration', i)
    start_time = time.time()

    # add a random jitter to the initial image; this will be reverted at decoding time
    random_jitter = (settings['jitter'] * 2) * (np.random.random(img_size) - 0.5)
    x += random_jitter

    # run L-BFGS for 7 steps
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=7)
    print('Current loss value:', min_val)

    # decode the dream and save it
    x = x.reshape(img_size)
    x -= random_jitter
    img = deprocess_image(np.copy(x))
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))

The script can be run as follows:

python deep_dream.py path_to_your_base_image.jpg prefix_for_results

For example:

python deep_dream.py mypic.jpg results
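The two presets bundled with the script ('bad_trip' and 'dreamy') are only starting points. One way to experiment is to register your own entry in saved_settings and select it instead of the settings = saved_settings['dreamy'] line near the top of the script. The snippet below is a hypothetical example: the layer names are real VGG16 layer names, but the coefficients are illustrative guesses rather than tuned values.

# hypothetical extra preset -- tune the numbers to taste
saved_settings['wobbly'] = {
    'features': {'block3_conv2': 0.03,   # mid-level textures
                 'block4_conv3': 0.02},  # higher-level patterns
    'continuity': 0.05,  # weight of the local-coherence penalty
    'dream_l2': 0.1,     # weight of the pixel-magnitude penalty
    'jitter': 2,         # amplitude of the random shift before each L-BFGS run
}
settings = saved_settings['wobbly']

As the article notes, earlier blocks respond to edges and textures while later blocks respond to object-like structures, so shifting weight between blocks changes the character of the result.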
Examples

I created the following pictures using this script. More examples can be found in the Google Inceptionism gallery.

About the author

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors journals, and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting.

Why Phaser is a Great Game Development Framework

Alvin Ourrad
17 Jun 2014
5 min read
You may have heard about the Phaser framework, which is fast becoming popular and is considered by many to be the best HTML5 game framework out there at the moment. Follow along in this post, where I will go into some detail about what makes it so unique.

Why Phaser?

Phaser is a free, open source HTML5 game framework that allows you to make fully fledged 2D games in a browser, with little prior knowledge of either game development or JavaScript and browser development in general. It was built, and is maintained, by a UK-based HTML5 game studio called Photon Storm, directed by Richard Davey, a very well-known Flash developer and now full-time HTML5 game developer. His company uses the framework for all of its games, so the framework is updated daily and thoroughly tested.

The fact that the framework is updated daily might sound like a double-edged sword, but now that Phaser has reached its 2.0 version, there won't be any changes that break compatibility, only new features; you can download Phaser and be pretty sure that your code will still work in future versions of the framework.

Phaser is beginner friendly!

One of the main strengths of the framework is its ease of use, and this is probably one of the reasons why it has gained such momentum in such a short amount of time (the framework is just over a year old). Phaser abstracts away all of the complicated math that is usually required to make a game by providing you with more than just game components. It allows you to skip the part where you sit thinking about how to implement a given feature and how much math it will require. With Phaser, everything is simple.

For instance, say you want to shoot something at a sprite or at the mouse cursor. Whether it is for a space invaders game or a tower defense game, here is what you would normally have to do to your bullet object (the following example uses pseudo-code and is not tied to any framework):

var speed = 50;
var vectorX = mouseX - bullet.x;
var vectorY = mouseY - bullet.y;

// if you were to shoot a target, not the mouse
vectorX = targetSprite.x - bullet.x;
vectorY = targetSprite.y - bullet.y;

var angle = Math.atan2(vectorY, vectorX);
bullet.x += Math.cos(angle) * speed;
bullet.y += Math.sin(angle) * speed;

With Phaser, here is what you would have to do:

var speed = 50;
game.physics.arcade.moveToPointer(bullet, speed);

// if you were to shoot a target:
game.physics.arcade.moveToObject(bullet, target, speed);

The fact that the framework was used in a number of games during the latest Ludum Dare (a popular Internet game jam) reflects this ease of use. There were about 60 of them, and you can have a look at them here.

To get started with learning Phaser, take a look at the Phaser examples, where you'll find over 350 playable examples. Each example includes a simple demo explaining how to do specific things with the framework, such as creating particles, using the camera, tweening elements, animating sprites, using the physics engine, and so on. A lot of effort has been put into these examples, and they are all maintained, with new ones constantly being added by the creator and the community.

Phaser doesn't need any additional dependencies

When using a framework, you will usually need an external device-detection library, one for the math and physics calculations, a time-management engine, and so on. With Phaser, everything is provided: for example, a very exhaustive device class, integrated into the framework and used extensively internally, that you can use to detect the browser's capabilities and that games use to manage scaling.

Yeah, but I don't like the physics engine…

Physics engines are usually a major feature of a game framework, and that is a fair point: physics engines often have their own vocabulary and their own ways of dealing with and measuring things, and it's not always easy to switch from one to another. The physics engines were a really important part of the Phaser 2.0 release. As of today, there are three physics engines fully integrated into Phaser's core, with the option to create a custom build of the framework to avoid bloating the source code. A physics management module was also created for this release. It dramatically reduces the pain of making your own or an existing physics engine work with the framework - this was the main goal of the feature: to make the framework physics-agnostic.

Conclusion

Photon Storm has put a lot of effort into its framework, and as a result the framework has become widely used by both hobbyists and professional developers. The HTML5 game developers forum is always full of new topics, and the community is very helpful as a whole. I hope to see you there.