
Tech News - Game Development

93 Articles

Researcher shares a Wolfenstein real-time ray tracing demo in WebGL1

Bhagyashree R
18 Mar 2019
3 min read
Last week, Reinder Nijhoff, a computer vision researcher, created a project that does real-time ray tracing in WebGL1. The demo was inspired by Metro's real-time global illumination.

https://twitter.com/ReinderNijhoff/status/1106193109376008193

The demo uses a hybrid rendering engine built with WebGL1. It renders all the polygons in a frame with traditional rasterization and then combines the result with ray-traced shadows, diffuse GI, and reflections.

Credits: Reinder Nijhoff

What is the ray tracing technique?

In computer graphics, ray tracing is a technique for rendering 3D scenes with very complex light interactions. An algorithm traces the path of light and simulates how it interacts with virtual objects. Light interacts with virtual objects in three ways: it can bounce from one object to another, causing reflections; it can be blocked by objects, causing shadows; and it can pass through transparent or semi-transparent objects, causing refractions. These interactions are combined to determine the final color of a pixel. (A toy sketch of the shadow case appears at the end of this article.)

Ray tracing has long been used for offline rendering because it accurately models the physical behavior of light in the real world. Due to its computationally intensive nature, it was rarely the first choice for real-time rendering. That changed with the introduction of Nvidia's RTX graphics cards, which add custom acceleration hardware and make real-time ray tracing relatively straightforward.

What was this demo about?

The project's prototype was based on a forward renderer that first draws all the geometry in the scene. Next, the shader used to rasterize the geometry (convert it into pixels) calculates the direct lighting. The shader also casts random rays from the surface of the rendered geometry to collect the indirect light reflected by non-shiny surfaces, using a ray tracer.

Nijhoff started with a very simple scene for the prototype, with a single light and only a few spheres and cubes, which kept the ray tracing code straightforward. Once the prototype was complete, he wanted to take it to the next level by adding more geometry and many more lights to the scene. Despite the complexity of the environment, he wanted to ray trace the scene in real time. Generally, a bounding volume hierarchy (BVH) is used as an acceleration structure to speed up ray tracing, but BVHs are difficult to pre-calculate and use in WebGL1 shaders. This is why Nijhoff used a Wolfenstein 3D level for the demo.

To know more, check out the original post shared by Reinder Nijhoff.

Unity switches to WebAssembly as the output format for the Unity WebGL build target
NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers
Introducing SCRIPT-8, an 8-bit JavaScript-based fantasy computer to make retro-looking games
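To make the shadow interaction concrete, here is a minimal, illustrative Python sketch of a single shadow-ray test against a scene of spheres. This is not Nijhoff's code (his demo is a WebGL1 shader); it only shows the core idea of tracing a ray from a surface point toward a light and checking for blockers.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def hit_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the sphere, or None."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a unit direction
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-4:                # skip self-intersection at t ~= 0
            return t
    return None

def in_shadow(point, light_pos, spheres):
    """Cast a ray from the surface point toward the light; any hit
    closer than the light means the point is shadowed."""
    to_light = sub(light_pos, point)
    light_dist = math.sqrt(dot(to_light, to_light))
    direction = normalize(to_light)
    for center, radius in spheres:
        t = hit_sphere(point, direction, center, radius)
        if t is not None and t < light_dist:
            return True
    return False

spheres = [((0.0, 0.0, -3.0), 1.0)]                  # one sphere in the scene
light = (2.0, 2.0, 0.0)
print(in_shadow((0.0, -1.0, -3.0), light, spheres))  # True: point faces away from light
```

In a hybrid renderer like Nijhoff's, a test of this shape runs per pixel after rasterization, with the Wolfenstein level's regular grid standing in for the BVH that WebGL1 shaders cannot easily use.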


What’s got game developers excited about Unity 2018.2?

Amarabha Banerjee
26 Jun 2018
3 min read
The undisputed leader among game engines over the last few years has been Unity. It brings .NET professionals and enthusiasts from across the globe under the gaming umbrella with its C# game scripting. Unity also boasts a very active community and an even busier release schedule. Unity follows semantic versioning, a scheme under which version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next. Unity has just released its 2018.2 beta. Here are some exciting features you can look forward to while working in Unity 2018.2.

Texture mipmap streaming: If you are a game developer, saving GPU memory is probably one of your top priorities. Unity 2018.2 gives you control over which mipmap levels are loaded into memory. Previous versions loaded all mipmaps at the same time, putting a huge load on GPU memory. While this scheme reduces GPU memory pressure, it adds a small amount of CPU load. (A back-of-the-envelope sketch of the memory at stake appears at the end of this article.)

Improved package manager: Unity 2018.2 comes with an improved package manager. The improvements cover the UI font and the package status labels; the window can now be docked, and it provides easy access to both documentation and the list of changes.

Improvements in the particle system: The 2018.2 beta comes with an improved particle system and new scripting APIs for baking the geometry of a particle system into a mesh. Unity now allows up to eight texture coordinates to be used on meshes and passed to shaders. Particle systems will also convert their colors into linear space, when appropriate, before uploading them to the GPU.

Camera improvements: Unity has made major improvements to the camera, the way it functions, and the way it renders objects in the game to make them look like real-life objects.

Animation Jobs C# API: Unity 2018.2 improves the AnimationPlayables by allowing users to write their own C# Playables that interact directly with the animation data. This allows integration of user-made IK solvers, procedural animation, or even custom mixers into the current animation system.

These features, along with other improvements and bug fixes, should help developers create better and smarter games with the latest Unity 2018.2. To know more about the Unity 2018.2 features, you can visit the official Unity blog.

How to use arrays, lists, and dictionaries in Unity for 3D game development
Build an ARCore app with Unity from scratch
Implementing lighting & camera effects in Unity 2018
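As a back-of-the-envelope illustration of what the mipmap streaming feature manages (plain Python arithmetic, not the Unity API): each mip level is a quarter the size of the one above it, so skipping just the top levels of a large texture frees most of its memory.

```python
def mip_chain_bytes(width, height, bytes_per_pixel=4, skip_top_levels=0):
    """Total bytes for a mip chain, optionally skipping the largest levels
    (roughly what streaming does when a surface is far from the camera)."""
    total, level = 0, 0
    while width >= 1 and height >= 1:
        if level >= skip_top_levels:
            total += width * height * bytes_per_pixel
        width, height, level = width // 2, height // 2, level + 1
    return total

full = mip_chain_bytes(4096, 4096)
streamed = mip_chain_bytes(4096, 4096, skip_top_levels=2)
print(f"full chain: {full / 2**20:.1f} MiB")                  # ~85.3 MiB
print(f"mips 2 and smaller only: {streamed / 2**20:.1f} MiB") # ~5.3 MiB
```

Dropping only the two largest levels cuts the texture's footprint by roughly 94%, which is why streaming pays off despite the small CPU cost of deciding which levels to load.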


Adobe Acquires Allegorithmic, a popular 3D editing and authoring company

Amrata Joshi
24 Jan 2019
3 min read
Yesterday, Adobe announced that it has acquired Allegorithmic, the creator of Substance and other 3D editing and authoring tools for gaming, entertainment, and post-production. Allegorithmic's customer base is diverse, ranging across the gaming, film and television, e-commerce, retail, automotive, architecture, design, and advertising industries. Popular Allegorithmic users include Electronic Arts, Ubisoft, Ikea, BMW, Louis Vuitton, and Foster + Partners, among others. Allegorithmic's Substance tools are used in games such as Call of Duty, Assassin's Creed, and Forza, and for visual effects and animation in popular movies like Blade Runner 2049, Pacific Rim Uprising, and Tomb Raider.

Adobe will help accelerate Allegorithmic's product roadmap and go-to-market strategy and further extend its reach among enterprise, SMB, and individual customers. Sebastien Deguy, CEO and founder of Allegorithmic, will take up a leadership role as vice president, handling Adobe's broader 3D and immersive design efforts. With this acquisition, Adobe also wants to make Creative Cloud (a set of applications and services from Adobe Systems that gives users access to software for graphic design, video editing, and more) the home of 3D design tools.

How will Creative Cloud benefit from Allegorithmic?

Adobe and Allegorithmic previously worked together three years ago. As a result of that work, Adobe introduced a standard PBR material for its Project Aero, Adobe Dimension, Adobe Capture, and every 3D element in Adobe Stock. Now, Adobe will empower video game creators, VFX artists, designers, and marketers by combining Allegorithmic's Substance 3D design tools with Creative Cloud's imaging, video, and motion graphics tools. Creative Cloud can benefit from Allegorithmic's tools in gaming, entertainment, and retail, and even for designing the textures and materials that give 3D content detail and realism. Creative Cloud tools such as Photoshop, Premiere Pro, Dimension, and After Effects are already in wide use and of great significance to content creators; the addition of Allegorithmic's Substance tools should make Creative Cloud even more powerful.

In a blog post, Scott Belsky, chief product officer and executive vice president at Creative Cloud, said, "Our goal with Creative Cloud is to provide creators with all the tools they need for whatever story they choose to tell. Increasingly, stories are being told with 3D content. That's why I'm excited to announce that today Adobe has acquired Allegorithmic, the industry standard in tools for 3D material and texture creation for gaming and entertainment."

Sebastien Deguy said, "Allegorithmic and Adobe share the same passion for bringing inspiring technologies to creators. We are excited to join the team, bring together the strength of Allegorithmic's industry-leading tools with the Creative Cloud platform and transform the way businesses create powerful, interactive content and experiences."

In the future, Adobe might focus on making Allegorithmic tools available via subscription. Some users are concerned about the termination of the perpetual license and are unhappy about this news. It will be interesting to see the next set of updates from the team at Adobe.
https://twitter.com/sudokuloco/status/1088101391871107073
https://twitter.com/2017_nonsense/status/1088181496710479872

Adobe set to acquire Marketo putting Adobe Experience Cloud at the heart of all marketing
Adobe glides into Augmented Reality with Adobe Aero
Adobe to spot fake images using Artificial Intelligence


Godot 3.1 released with improved C# support, OpenGL ES 2.0 renderer and much more!

Savia Lobo
15 Mar 2019
4 min read
On Wednesday, March 13, the Godot developers announced the release of a new version of the open source, cross-platform 2D and 3D game engine, Godot 3.1. The new version includes much-requested improvements to the previous major release, Godot 3.0.

Improved features in Godot 3.1

OpenGL ES 2.0 renderer

Rendering is done entirely in sRGB color space (the GLES3 renderer uses linear color space). This is much more efficient and compatible, but it means that HDR is not supported. Some advanced PBR features, such as subsurface scattering, are not supported either, and unsupported features are hidden when editing materials. Some shader features will not work and will throw an error when used, and some post-processing effects are absent; unsupported features are likewise hidden when editing environments. GPU-based particles will not work, as there is no transform feedback support; users can use the new CPUParticles node instead.

Optional typing in GDScript

This has been one of the most requested Godot features from day one. GDScript lets you write code quickly within a controlled environment. The code editor now shows which lines are type-safe with a slight highlight of the line number. This will be vital in the future for optimizing small pieces of code that require more performance. (A short sketch of the idea appears at the end of this article.)

Revamped Inspector

The Godot inspector has been rewritten from scratch. It includes proper vector field editing, sub-inspectors for resource editing, better custom visual editors for many types of objects, comfortable spin-slider controls, better array and dictionary editing, and many more features.

KinematicBody2D (and 3D) improvements

Kinematic bodies are among Godot's most useful nodes. They allow creating very game-like character motion with little effort. For Godot 3.1 they have been considerably improved, with support for snapping the body to the floor, support for RayCast shapes in kinematic bodies, and support for synchronizing kinematic movement to physics, avoiding a one-frame delay.

New axis handling system

Godot 3.1 uses the novel concept of "action strength". This approach allows using actions for all use cases and makes it very easy to create in-game customizable mappings and customization screens.

Visual shader editor

This was a pending feature to re-implement from Godot 3.0, but it couldn't be done in time back then. The new version adds features such as PBR outputs, port previews, and easier-to-use mapping to inputs.

2D meshes

Godot now supports 2D meshes, which can be used from code or converted from sprites to avoid drawing large transparent areas.

2D skeletons

It is now possible to create 2D skeletons with the new Skeleton2D and Bone2D nodes. Additionally, Polygon2D vertices can be assigned bones and weight painted, and adding internal vertices for better deformation is also supported.

Constructive Solid Geometry (CSG)

CSG tools have been added for fast level prototyping, allowing generic primitives and custom meshes to be combined via boolean operations to generate more complex shapes. They can also become colliders for testing together with physics.

CPU-based particle system

Godot 3.0 integrated a GPU-based particle system, which can emit millions of particles at little performance cost. The developers have now added alternative CPUParticles and CPUParticles2D nodes that perform particle processing on the CPU (and draw using the MultiMesh API).
These nodes open the window for features such as physics interaction, sub-emitters, or manual emission, which are not possible on the GPU.

More VCS-friendly

The 3.1 release includes some much-requested enhancements for version control: folded properties are no longer saved in scenes, which avoids unnecessary history pollution, and non-modified properties are no longer saved, which makes text files considerably smaller and history even more readable.

Improved C# support

In Godot 3.1, C# projects can be exported to Linux, macOS, and Windows. Support for Android, iOS, and HTML5 will come soon.

To know about the other improvements in detail, visit the changelog or the official website.

Microsoft announces Game stack with Xbox Live integration to Android and iOS
OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents
Google teases a game streaming service set for Game Developers Conference
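GDScript's optional typing is analogous to gradual typing in other languages; since this article has no GDScript snippets, here is the idea expressed with Python's optional type hints instead. Untyped code keeps working unchanged, while annotated code gives the tooling enough information to flag safe lines and, eventually, to optimize them.

```python
def take_damage_untyped(health, amount):
    # Dynamic: nothing is checked; 'health' could be anything at runtime.
    return health - amount

def take_damage_typed(health: int, amount: int) -> int:
    # Annotated: an editor or checker can verify every call site,
    # which is what Godot's "type-safe line" highlight signals.
    return health - amount

print(take_damage_typed(100, 30))  # 70
```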


Fortnite creator Epic Games launches Epic Games store where developers get 88% of revenue, challenging Valve's dominance

Sugandha Lahoti
05 Dec 2018
3 min read
The game studio that brought the phenomenal online video game Fortnite to life has launched the Epic Games store. In a blog post on the Unreal Engine website, Epic stated that the store will have "fair economics and a direct relationship with players". All players who buy a game will be subscribed to a developer's newsfeed, where the developer can contact them with updates and news about upcoming releases. Developers can also control their game pages and connect with YouTube content creators, Twitch streamers, and bloggers through the recently launched Support-A-Creator program.

The Epic Games store will follow an 88/12 revenue split. "Developers receive 88% of revenue," the company wrote. "There are no tiers or thresholds. Epic takes 12%. And if you're using Unreal Engine, Epic will cover the 5% engine royalty for sales on the Epic Games store, out of Epic's 12%."

Source: Unreal Engine

Epic's inspiration for the 88/12 split may have been Valve's Steam store (a major competitor to Epic Games), which recently tweaked its revenue sharing. "Starting from October 1, 2018, when a game makes over $10 million on Steam, the revenue share for that application will adjust to 75 percent/25 percent on earnings beyond $10 million," Valve wrote in its official blog post. "At $50 million, the revenue share will adjust to 80 percent/20 percent on earnings beyond $50 million." (A worked comparison of the two splits appears at the end of this article.)

The Epic Games store will launch with a few selected games on PC and Mac, then open up to other games and to Android and other open platforms throughout 2019. With this move, Epic Games is looking to attract more gamers and developers to its platform, and a better revenue split will do most of that work for it. A developer-favoring split should also open up competition in PC-game distribution, a market previously anchored to the immovable 70/30 split.

Twitterati were fairly happy with this announcement, and many agreed that it poses a threat to Valve.

https://twitter.com/Grummz/status/1069975572984385537
https://twitter.com/SpaceLyon/status/1069979966501208065
https://twitter.com/lucasmtny/status/1069970212424953857
https://twitter.com/nickchester/status/1069970684112265217

The Epic Games team will reveal more details on upcoming game releases at the Game Awards this Thursday. Read the blog post by Epic Games to know more.

Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before a patch was ready
Google is missing out on $50 million because of Fortnite's decision to bypass the Play Store
Implementing fuzzy logic to bring AI characters alive in Unity based 3D games
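To see what the split means in practice, here is a small Python calculation (my own arithmetic, using only the percentages quoted above) comparing the developer's share under Epic's flat 88/12 split and Valve's tiered split.

```python
def steam_dev_share(gross):
    """Valve's tiers from October 1, 2018: 70/30 up to $10M,
    75/25 on earnings beyond $10M, 80/20 beyond $50M."""
    share = 0.70 * min(gross, 10e6)
    if gross > 10e6:
        share += 0.75 * (min(gross, 50e6) - 10e6)
    if gross > 50e6:
        share += 0.80 * (gross - 50e6)
    return share

def epic_dev_share(gross):
    """Flat 12% cut; the 5% Unreal Engine royalty is waived
    for sales on the Epic Games store."""
    return 0.88 * gross

for gross in (1e6, 10e6, 60e6):
    print(f"${gross/1e6:>4.0f}M gross -> Steam: ${steam_dev_share(gross)/1e6:5.2f}M, "
          f"Epic: ${epic_dev_share(gross)/1e6:5.2f}M")
```

Even at $60M gross, where Valve's best tier applies, the developer keeps $45M on Steam versus $52.8M on Epic's store.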


Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership

Sugandha Lahoti
17 May 2019
3 min read
Microsoft and Sony have been fierce gaming rivals ever since 2001, when Microsoft's Xbox challenged the Sony PlayStation 2. In an unusual announcement yesterday, however, Microsoft and Sony signed a memorandum of understanding to jointly explore the development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services. Sony and Microsoft will also explore collaboration in semiconductors and AI: for semiconductors, they will jointly develop new intelligent image sensor solutions, and for AI, the parties will incorporate Microsoft's AI platform and tools into Sony's consumer products. Microsoft said in a statement that "these efforts will also include building better development platforms for the content creator community," suggesting that both companies will probably partner on future services aimed at creators and the gaming community.

Rivals turned allies

Sony's decision to set the rivalry aside and partner with Microsoft makes sense for two main reasons. First, cloud streaming is considered the next big thing in gaming, and only three companies, Microsoft, Google, and Amazon, have enough cloud experience to offer viable, modern cloud infrastructure. Although Sony has the technical competence to build its own cloud streaming service, it makes more sense to deploy via Microsoft's Azure than to scale its own distribution systems, and Microsoft is happy to welcome a customer as large as Sony. Moreover, neither Sony nor Microsoft is committing to game streaming completely, as both already have consoles in development. This is unlike Amazon and Google, who are going full throttle into game streaming. By putting real resources into these efforts, and going so far as to collaborate, Sony and Microsoft show they understand that game streaming is not something they can afford to be without.

Second, this partnership is likely a direct response to Google's Stadia game streaming service, unveiled at Game Developers Conference 2019. Stadia is a cloud-based game streaming platform that aims to bring together gamers, YouTube broadcasters, and game developers "to create a new experience". Games are streamed from a data center to any device that can connect to the internet: TV, laptop, desktop, tablet, or mobile phone. Gamers can access their games anytime, on virtually any screen, and game developers can use nearly unlimited resources for developing games. Since all the graphics processing happens on off-site hardware, there is little stress on local hardware.

"Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation," says Microsoft CEO Satya Nadella. "Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers."

Twitter was filled with funny memes about this alliance and its direct contest with Stadia.

https://twitter.com/MikieDaytona/status/1129076134950445056
https://twitter.com/shaunlabrie/status/1129144724646813696
https://twitter.com/kettleotea/status/1129142682004205569

Going forward, the two companies will share additional information when available. Read the official announcement here.
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Microsoft announces Project xCloud, a new Xbox game streaming service
Amazon is reportedly building a video game streaming service, says Information

NVIDIA launches GeForce Now’s (GFN) 'recommended router' program to enhance the overall performance and experience of GFN

Natasha Mathur
24 Dec 2018
2 min read
NVIDIA launched a 'recommended router' program last week to improve the overall experience of its GeForce NOW (GFN) cloud gaming service for PC and Mac. The GeForce NOW game-streaming service has transformed the experience of playing high-performance games, and NVIDIA has now rolled out a few enhancements in beta to improve the quality of its service through this program.

The recommended router program comprises the latest generation of routers for in-home cloud gaming, video streaming, and downloading. These routers let users configure settings so that GeForce NOW traffic is prioritized over all other data. Recommended routers are certified as "factory-enabled" with a GeForce NOW "quality of service (QoS) profile" that ensures cloud game play at its best quality; the router settings load automatically once GeForce NOW launches. Network latency, the biggest drawback of cloud gaming, is quite low with these routers, and they also offer better streaming speeds for GeForce NOW.

"We're working closely with ASUS, D-LINK, Netgear, Razer, TP-Link, Ubiquiti Networks and other router manufacturers to build GeForce NOW recommended routers. They're committed to building best-in-class cloud gaming routers — just as we're committed to delivering best-in-class gaming experiences," says the NVIDIA team.

GFN recommended routers are now available in the U.S. and Canada, starting with the Amplifi HD Gamer's Edition by Ubiquiti Networks. Amplifi uses multiple self-configuring radios and advanced antenna technology to deliver powerful, whole-home Wi-Fi coverage.

For more information, read the official NVIDIA blog.

NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0


Corona Labs open sources Corona, its free and cross-platform 2D game engine

Natasha Mathur
03 Jan 2019
3 min read
Corona Labs announced yesterday that it is making its free, cross-platform 2D game engine, Corona, available as open source under the GPLv3 license and commercial licenses. The license for builds and releases remains unchanged; the change applies only to the source code of the engine.

Corona is a popular game engine for creating 2D games and apps for mobile, desktop, TV platforms, and the web. It is based on the Lua language and makes use of over 1,000 built-in APIs and plugins, plus Corona Native extensions (C/C++/Obj-C/Java).

According to Vlad Sherban, product manager for Corona Labs, the Corona team had been discussing making Corona open source ever since it was acquired by Appodeal back in 2017. "We believe that this move will bring transparency to the development process, and will allow users to contribute features or bug fixes to make the project better for everyone," said Sherban.

The team also mentions that transitioning to open source will help them respond quickly to market shifts and changes, and will ensure that Corona stays relevant for all mobile app developers. Moreover, open sourcing brings more visibility to the development process by letting users see what the engine team is working on and where the project is going. It also offers extra benefits for businesses, which can acquire a commercial license for the source code and customize the engine for certain commercial projects.

Additionally, Corona Labs won't be collecting any statistics from apps built with daily build 2018.3454 or later. When Corona was a closed source product, it collected basic app usage stats such as the number of sessions and daily average users. With Corona available as open source, there is no need to collect this data.

"Powered by the new open source model, and supported by the development of new features and bug fixes, this will make Corona more community driven, but not without our help and guidance. Going open source will provide confidence in the future of the engine and an opportunity to grow community involvement in engine development," said Sherban.

NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
Microsoft open sources Trill, a streaming engine that employs algorithms to process "a trillion events per day"
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices


OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents

Amrata Joshi
06 Mar 2019
3 min read
On Monday, the team at OpenAI launched Neural MMO (Massively Multiplayer Online), a multiagent game environment for reinforcement learning agents. It will be used for training AI in complex, open-world environments. The platform supports a large number of agents within a persistent and open-ended task.

The need for Neural MMO

The suitability of MMOs for modeling real-life events has been explored for the past few years, but two main challenges remain for multiagent reinforcement learning. First, there is a need to create open-ended tasks with a high complexity ceiling, as current environments are complex but narrow. Second, the OpenAI team points to the need for more benchmark environments that can quantify learning progress in the presence of large population scales.

Criteria for overcoming the challenges

The team suggests criteria an environment needs to meet to overcome these challenges.

Persistence: Agents can learn concurrently in the presence of other learning agents, without environment resets. Strategies must adapt to rapid changes in the behaviors of other agents and consider long time horizons.

Scale: Neural MMO supports a large and variable number of entities. The OpenAI team's experiments consider up to 100M lifetimes of 128 concurrent agents in each of 100 concurrent servers.

Efficiency: The computational barrier to entry is low; effective policies can be trained on a single desktop CPU.

Expansion: Neural MMO is designed to be updated with new content. The core features include a food and water foraging system, procedural generation of tile-based terrain, and a strategic combat system. There are opportunities for open-source-driven expansion in the future.

The environment

Players can join any available server, each containing an automatically generated tile-based game map of configurable size. Some tiles are traversable, such as food-bearing forest tiles and grass tiles, while others, such as water and solid stone, are not. To sustain their health, players must obtain food and water and avoid combat damage from other agents. The platform comes with a procedural environment generator and visualization tools for map tile visitation distribution, value functions, and agent-agent dependencies of learned policies.

The team trained a fully connected architecture using vanilla policy gradients, with a value function baseline and reward discounting as the only enhancements. They converted variable-length observations, such as the list of surrounding players, into a fixed-length vector by computing the maximum across all players (a sketch of this trick appears below).

Neural MMO resolves a couple of limitations of previous game-based environments, but many remain unsolved. Some users are excited about this news; one commented on Hacker News, "What I find interesting about this is that the agents naturally become pacifists." Others think the company should come up with novel ideas rather than replicated ones. Another Hacker News comment reads, "So far, they are replicating known results from evolutionary game theory (pacifism & niches) to economics (distance & diversification). I wonder when and if they will surprise some novel results."

To know more about this news, check out OpenAI's official blog post.
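Here is a short sketch (not OpenAI's code) of the observation trick just described: the list of surrounding players varies in length from step to step, so it is collapsed into a fixed-length vector by taking the element-wise maximum across players. FEAT_DIM is an illustrative feature size, not a value from the paper.

```python
import numpy as np

FEAT_DIM = 8  # illustrative per-player feature size

def pool_player_observations(player_features):
    """Collapse a variable-length list of per-player feature vectors
    into one fixed-length vector via element-wise maximum."""
    if not player_features:            # no players nearby
        return np.zeros(FEAT_DIM)
    return np.stack(player_features).max(axis=0)

three_nearby = [np.random.rand(FEAT_DIM) for _ in range(3)]
one_nearby = [np.random.rand(FEAT_DIM)]
print(pool_player_observations(three_nearby).shape)  # (8,)
print(pool_player_observations(one_nearby).shape)    # (8,), same shape
```

The fixed output shape is what lets a plain fully connected policy consume the observation regardless of how many agents are in view.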
AI Village shares its perspective on OpenAI's decision to release a limited version of GPT-2
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words


Unity has launched the ‘Obstacle Tower Challenge’ to test AI game players

Sugandha Lahoti
29 Jan 2019
2 min read
Unity has announced a video game challenge, the Obstacle Tower Challenge, which will test the vision, control, planning, and generalization capabilities of AI software. The challenge uses a game-like environment of platform-style gameplay, with puzzles and planning problems inside a tower setting of almost 100 floors. It will examine how AI software performs at computer vision, locomotion skills, and high-level planning.

The challenge begins on Monday, February 11 and runs through Friday, May 24. As the challenge opens, participants can review the rules, download the Starter Kit, and begin training their agents. Round 1, from February 11 to March 31, has participants playing up to floor 25 of the Obstacle Tower. The winners proceed to round 2, which uses all 100 floors, after which the final winners will be announced June 14. Participants can win prizes in the form of cash, travel vouchers, and Google Cloud Platform credits, valued at over $100,000.

"Each of the Tower floors are procedurally-generated, which means an AI agent must not only be able to solve a single version of the Tower but any arbitrary version as well. In this way, we're testing the generalization ability of agents, a key capability that has not often been analyzed by benchmarks in the past," said Danny Lange, Vice President of AI and Machine Learning, Unity Technologies. The end goal of this challenge is to spur new AI research and solve new problems in reinforcement learning. (A sketch of what such a generalization test looks like appears at the end of this article.)

AI has been making great progress in conquering high-profile games. Recently, Google DeepMind's AlphaStar defeated StarCraft II pros TLO and MaNa, winning 10-1 against the gamers.

Unity updates its TOS, developers can now use any third party service that integrate into Unity
Improbable says Unity blocked SpatialOS; Unity responds saying it has shut down Improbable and not SpatialOS
Unity and Baidu collaborate for simulating the development of autonomous vehicles
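Here is a rough sketch of what such a generalization test looks like. Every name here (make_tower_env, the agent interface, the info keys) is hypothetical, not the official challenge API; the point is that the agent is scored across many towers it has never seen, so memorizing one layout cannot help.

```python
def evaluate_generalization(agent, make_tower_env, seeds=range(10)):
    """Average the highest floor reached over freshly generated towers.
    'make_tower_env' is a hypothetical factory returning a Gym-style env."""
    floors = []
    for seed in seeds:
        env = make_tower_env(seed)   # a tower layout the agent never trained on
        obs = env.reset()
        done, highest = False, 0
        while not done:
            obs, reward, done, info = env.step(agent.act(obs))
            highest = max(highest, info.get("floor", 0))
        floors.append(highest)
        env.close()
    return sum(floors) / len(floors)
```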

Unity Benchmark report validates WebAssembly load times and performance in popular web browsers

Sugandha Lahoti
18 Sep 2018
2 min read
Unity has released a new benchmarking report, two years after its last one, comparing the performance and load times of WebAssembly with asm.js. The team compared the performance of Unity WebGL in four major web browsers: Firefox 61, Chrome 70, Safari 11.1.2, and Edge 17. Last month, Unity officially announced that it is finally making the switch to WebAssembly as the output format for the Unity WebGL build target.

Note: All images and graphs are taken from the Unity Blog.

For running the tests, the team rebuilt the Benchmark project with Unity 2018.2.5f1 (the exact Unity WebGL Player Settings used are shown in the original post). Here are the findings from the report.

Criterion 1: Total time taken to get to the main screen, for both WebAssembly and asm.js.

Findings: Firefox is comparatively fast to load on both Windows and macOS; Chrome and Edge load faster when using WebAssembly; and all browsers except Safari load faster with WebAssembly than with asm.js.

Criterion 2: In-depth load times, WebAssembly only. The team compared four phases: WebAssembly compilation and instantiation; Unity engine initialization and first scene load; time to render the first frame; and time to load and reach a stable frame rate.

Findings: Firefox is the fastest overall on both Windows and Mac. Edge compiles Wasm quickly (even faster than Firefox) but is slower in Unity engine initialization.

Criterion 3: Performance and load times for real-world projects. Real-world projects produce larger builds, which impact the end user's experience. The report gives an overview of total scores using WebAssembly and asm.js.

Findings: All browsers perform better when using WebAssembly. On Windows, all browsers perform similarly; on macOS, Firefox outperforms all other browsers. Safari benefits the most from WebAssembly, since it doesn't support asm.js optimizations.

Conclusion

The report concludes that modern browsers load faster and perform better thanks to WebAssembly, which also provides a more consistent user experience than asm.js. Read more about the findings on the Unity Blog.

Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments
Key Takeaways from the Unity Game Studio Report 2018
Unity switches to WebAssembly as the output format for the Unity WebGL build target


Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based AI powered game development platform

Natasha Mathur
02 Nov 2018
4 min read
Electronic Arts (EA) announced Project Atlas, a new AI-powered, cloud-computing-based game development platform, earlier this week. Project Atlas comes with high-quality LIDAR data, improved scalability, a cloud-based engine, and enhanced security, among other features. General availability of Project Atlas hasn't been disclosed yet.

"We're calling this Project Atlas and we believe in it so much that we have over 1,000 EA employees working on building it every day, and dozens of studios around the world contributing their innovations, driving priorities, and already using many of the components," mentioned Ken Moss, Chief Technology Officer at Electronic Arts.

Let's discuss the features of Project Atlas.

High-quality LIDAR data

Project Atlas will use high-quality LIDAR data about real mountain ranges. This data is passed through a deep neural network trained to create terrain-building algorithms. With the help of this AI-assisted terrain generation, designers will be able to generate not just a single mountain but a series of mountains, along with the surrounding environment, to bring the realism of the real world. "This is just one example of dozens or even hundreds where we can apply advanced technology to help game teams of all sizes scale to build bigger and more fun games," says Moss.

Improved scalability

Earlier, all simulation or rendering of in-game actions was limited either to the processing performance of the player's console or to a single server interacting with your system. Now, with the cloud, players can tap into a network of many servers dedicated to computing complex tasks, delivering hyper-realistic destruction within new HD games that is nearly indistinguishable from real life. "We're working to deploy that level of gaming immersion on every device," says Moss. Moreover, integrating distributed networks at the rendering level means infinite scalability from the cloud: whether you're on a team of 500 or just 5, you'll be able to scale games and create immersive experiences in unprecedented ways.

Cloud-based engine and moddable asset database

With Project Atlas, you can turn your own vision into reality and share the creation with your friends as well as the whole world. You can also market your ideas and visions to the community. With this in mind, the Project Atlas team is planning a cloud-enabled engine that can seamlessly integrate different services. Along with a moddable asset database, there will also be a common marketplace where users can share and rate other players' creations. "Players and developers want to create. We want to help them. By blurring the line between content producers and players, this will truly democratize the game experience," adds Moss.

Enhanced security

Project Atlas comes with a unified platform where game makers can seamlessly deploy security measures such as SSL certificates, configuration, appropriate encryption of data, and zero-downtime patches for every feature, from a single secure source. This lets them focus more on creating games and less on taking the required security measures. "We're solving for some of the manually intensive demands by bringing together AI capabilities in an engine and cloud-enabled services at scale.
With an integrated platform that delivers consistency and seamless delivery from the game, game makers will free up time, brainspace, and energy for the creative pursuit," says Moss.

For more information, check out the official Project Atlas blog.

Xenko 3.0 game engine is here, now free and open-source
Meet yuzu – an experimental emulator for the Nintendo Switch
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior


Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google Lens and more

Fatema Patrawala
09 May 2019
11 min read
This year's Google I/O was meant to be big, and it didn't disappoint. There's a lot of news to talk about, as Google introduced and showcased exciting new products, updates, features, and functionality for a much better user experience. Google I/O kicked off yesterday and runs through Thursday, May 9 at the Shoreline Amphitheater in Mountain View, California, with approximately 7,000 attendees from around the world.

"To organize the world's information and make it universally accessible and useful. We are moving from a company that helps you find answers to a company that helps you get things done. Our goal is to build a more helpful Google for everyone." Google CEO Sundar Pichai opened his keynote with these statements. He listed a few recent tech advances and said, "We continue to believe that the biggest breakthroughs happen at the intersection of AI."

He then went on to discuss how Google is confident that it can do more AI without private data leaving your devices, and that the heart of the solution will be federated learning. Federated learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. It enables mobile phones at different geographical locations to collaboratively train a machine learning model without transferring any data that may contain personal information off the devices.

While the keynote lasted nearly two hours, the breakthrough innovations it introduced are covered briefly below.

Google Search at Google I/O 2019

Google remains a search giant, and that's something it has not forgotten at Google I/O 2019. Search is about to become far more visually rich, thanks to an AR camera trick now built directly into search results. An on-stage demonstration showed how a medical student could search for a muscle group and be presented, within mobile search results, with a 3D representation of the body part. Not only could it be manipulated within the search results, it could also be placed on the user's desk and seen at real scale through the smartphone's screen.

Source: Google

Even larger things, like an AR shark, can be put into your AR screen straight from the app; the Google team showcased this feature as a shark virtually appeared live in front of the audience.

Google Lens bill splitting and food recommendations

Google Lens was something that caught the audience's interest in Google's app arsenal. Lens uses image recognition to deliver information based on what your camera is looking at. A demo showed how a combination of mapping data and image recognition lets Google Lens make recommendations from a restaurant's menu, just by pointing your camera at it. And when the bill arrives, pointing your camera at the receipt shows tipping info and bill-splitting help. Google also announced a partnership with recipe providers to let Lens produce video tutorials when your phone is pointed at a written recipe.

Source: Google

Debut of the Android Q beta 3

At Google I/O, Android Q beta 3 was introduced. Android Q is the 10th generation of the Android operating system, and it comes with new features for phone and tablet users.
Google announced that there are over 2.5 billion active Android devices, as the software extends to televisions, in-car systems, and smart screens like the Google Home Hub. It also discussed how Android will work with foldable devices, seamlessly tweaking its UI depending on the format and ratio of the folding device. Another new feature, the live caption system in Android Q, turns audio instantly into text to be read. It's a system function triggered from the volume rocker menu; captions can be tweaked for legibility, don't require an internet connection, and appear even on videos that have never been manually close-captioned. Because it works at the OS level, it works across all your apps.

Source: Google

The smart reply feature will now work across all messaging apps in Android Q, with the OS smartly predicting your text. A dark theme, activated by battery saver or a quick tile, was introduced; lighting up fewer pixels saves battery life. Android Q will also double down on security and privacy features, such as a Maps incognito mode, reminders for location usage and sharing (for instance, only when a delivery app is in use), and TLS 1.3 encryption for low-end devices. Security updates will roll out faster too, updating over the air without a reboot. Alongside Android Q beta 3, launching today on 21 new devices, Google announced that Android development will increasingly be Kotlin-first, Kotlin being a statically typed programming language used for writing Android apps.

Chrome to be more transparent in cookie control

Google announced that it will update Chrome to give users more transparency about how sites use cookies, as well as simpler controls for cross-site cookies. A number of changes will be made to Chrome: developers will need to explicitly specify which cookies are allowed to work across websites and could therefore be used to track users. The mechanism is built on the web's SameSite cookie attribute, and the technical details are on web.dev. (A minimal sketch of the attribute appears at the end of this article.)

In the coming months, Chrome will require developers to use this mechanism to access their cookies across sites. This change will let users clear all such cookies while leaving single-domain cookies unaffected, preserving logins and settings. It will also let browsers provide clear information about which sites are setting these cookies, so users can make informed choices about how their data is used. The change also has a significant security benefit, protecting cookies from cross-site injection and data disclosure attacks like Spectre and CSRF by default. Google further announced that it will eventually limit cross-site cookies to HTTPS connections, providing additional important privacy protections. Developers can start testing their sites to see how these changes will affect behavior in the latest developer build of Chrome.

Google also announced Flutter for web, mobile, and desktop. Web-based applications can now be built using the Flutter framework, the core framework for mobile devices is upgraded to Flutter 1.5, and desktop support ships as an experimental project.

"We believe these changes will help improve user privacy and security on the web — but we know that it will take time.
We're committed to working with the web ecosystem to understand how Chrome can continue to support these positive use cases and to build a better web," say Ben Galbraith, Director, Chrome Product Management, and Justin Schuh, Director, Chrome Engineering.

Next-generation Google Assistant

Google has been working hard to compress and streamline the AI that Google Assistant taps in the cloud when processing voice commands. Currently, every voice request has to run through three separate processing models to land on the correctly understood voice command, data that until now has taken up 100GB of storage on many Google servers. That's about to change: Google has figured out how to shrink the models down to just 500MB of storage and put them on your device. This lowers the latency between a voice request and the task it triggers; it's 10x faster, 'real time' according to Google.

Google also showed a demo in which a rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying but what she actually meant. For example, she said, "Hey Google, what's the weather today? What about tomorrow? Show me John Legend on Twitter; Get a Lyft ride to my hotel; turn the flashlight on; turn it off; take a selfie." Assistant executed the whole sequence flawlessly, in a span of about 15 seconds.

Source: Google

Further demos showed off Assistant's ability to compose texts and emails drawing on information about the user's travel plans, traffic conditions, and photos. Last but not least, it can silence your alarms and timers when you simply say 'Stop', to help you go back to your slumber.

Google Duplex gets smarter

Google Duplex is a Google Assistant service which previously made calls and bookings on your behalf on request. It is now getting smarter with the new 'Duplex on the web' feature: you can ask Duplex to plan a trip, and it will begin filling in website forms such as reservation details, hire-car bookings, and more on your behalf, waiting only for you to confirm the details it has entered.

Google Home Hub is dead, long live the Nest Hub Max

At Google I/O, the company announced it is dropping the Google Home moniker, rebranding its devices under the Nest name and bringing them in line with its security systems. The Nest Hub Max was introduced, with a camera and a larger 10-inch display. With a built-in Nest Cam wide-angle (127-degree) security camera, which the original Home Hub omitted due to privacy concerns, it is now a far more security-focused device. It also lets you make video calls using a wide range of video-calling apps. For the privacy-conscious, the cameras and mics can be physically switched off with a slider that cuts off the electronics.

Source: Google

Voice and face match features, which let families create voice and face models, tell the Hub Max to show only an individual's information and recommendations. It also doubles as a kitchen TV if you have access to YouTube TV plans, and lowering the volume is as simple as raising your hand in front of the display. It launches this summer for $229 in the US and AU$349 in Australia; the original Hub gets a price cut to $129 / AU$199.

Other honorable mentions

Google Stadia: Google had introduced its new game-streaming service, called Stadia, in March.
The service uses Google's own servers to store and run games, which you can then connect to and play whenever you'd like on literally any screen in your house, including your desktop, laptop, TV, phone, and tablet. If it's internet-connected and has access to Chrome, it can run Stadia. At I/O, Google announced that Stadia will stream games from the cloud not only to the Chrome browser but also to the Chromecast and other Pixel and Android devices. The launch is planned for later this year in the US, Canada, UK, and Europe.

A cheaper Pixel phone: While other smartphones are getting more competitive in pricing, Google introduced the Pixel 3a, which is less powerful than the existing Pixel 3 and, at a base price of $399, half as expensive. In 2017, Forbes analyzed why the Google Pixel failed in the market; one reason was its exorbitant price. The analysis stated that the tech giant needs to realize that its brand in the phone hardware business is simply not worth as much as Samsung's or Apple's, and cannot command the same price premium.

Source: Google

Focus mode: A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.

Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone's camera to the street in front of you.

Incognito mode for Google Maps: Google also announced a new "incognito" mode for Google Maps, which stops keeping records of your whereabouts while enabled. The feature will further roll out to Google Search and YouTube.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
You can now permanently delete your location history, and web and app activity data on Google
Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says
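Returning to the Chrome cookie changes above: here is a minimal Python sketch of the SameSite attribute developers will have to set explicitly (standard-library http.cookies; an illustration of the attribute, not Google's implementation).

```python
from http.cookies import SimpleCookie  # samesite support needs Python 3.8+

cookie = SimpleCookie()

cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"   # first-party use; not sent cross-site

cookie["tracker"] = "xyz789"
cookie["tracker"]["samesite"] = "None"  # explicitly allowed across sites...
cookie["tracker"]["secure"] = True      # ...but then HTTPS-only, as Chrome will require

print(cookie.output())
# Produces Set-Cookie headers along the lines of (attribute order may vary):
#   Set-Cookie: session=abc123; SameSite=Lax
#   Set-Cookie: tracker=xyz789; SameSite=None; Secure
```

Cookies that never declare SameSite=None are the ones Chrome will treat as single-domain, which is what makes the selective clearing described above possible.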

Key Takeaways from the Unity Game Studio Report 2018

Natasha Mathur
31 Aug 2018
3 min read
The Unity team has come out with the Unity Game Studio Report 2018 to share benchmarking data on existing studios with emerging studios. The aim is to show emerging studios how fellow creative studio teams operate and make successful games. The report is based on a study of the leads of 1,445 small and medium independent creative studios (ranging in size from 2 to 50 employees) from across the globe, including studios using Unity as their main game engine as well as studios using other engines.

https://www.youtube.com/watch?v=4hoO_5qNel0

Let's have a look at a few of the major highlights of the report.

Studios are recent, independent, and compact

As per the 2018 Unity Game Studio report, 91% of the surveyed studios are recently established and fully independent, and the majority of them are developing their own IPs.

Studios develop, publish, and promote games on their own

Source: Unity Game Studio Report 2018

40% of existing and emerging studios are focused on developing AR/VR, which shows these platforms are becoming more established among independent creators. The majority of studios publish their projects themselves, and for marketing, the most popular media are Facebook and Twitter. The report also highlights that 53% of studios monetize their projects via premium payments, while 36% plan to monetize with the freemium model.

Studios need a wide range of tools to run

69% of emerging studios use team collaboration along with cloud storage solutions, while less than 40% of studios use analytics to analyze player behavior.

Studios run on a lean budget

Source: Unity Game Studio Report 2018

As mentioned in the report, approximately 60% of all studios' budgets come from freelancing and self-funding, yet a small part of the budget still goes to training employees. The report also highlights the hard work the majority of independent game studios put in to establish themselves. "Not only do (independent developers) bring their creative vision to life, they do so with ingenuity, flair, and lots of bootstraps, overcoming challenges posed by constrained resources with imagination, moxie, and dedication to their love of creating games," writes Jen MacLean, Executive Director at the International Game Developers Association (IGDA), in the report foreword.

For more information, check out the complete Unity Game Studio Report 2018.

Unity switches to WebAssembly as the output format for the Unity WebGL build target
Implementing the Unity game engine and assets for 2D game development [Tutorial]
Designing UIs in Unity: What you should know


Meet wideNES: a new tool to let you experience the NES classics again

Natasha Mathur
30 Aug 2018
4 min read
A new tool called wideNES lets you relive your childhood days, only this time you can record the screen while playing in real time, gradually building up a map of the different levels explored. wideNES is a feature of ANESE, an NES emulator developed by Daniel Prilik. What's great about wideNES is that it syncs the on-screen action to the generated map, allowing players to peek past the edge of the NES's screen and see ahead of the level. The mapping technique is not limited to a few titles; it works with a wide range of NES games. Let's look at how wideNES works.

Rendering graphics

Back in the 80s, the NES (Nintendo Entertainment System) used a MOS 6502 CPU together with a powerful graphics coprocessor called the Picture Processing Unit (PPU), and wideNES makes use of the PPU as well. The PPU is an integrated circuit that generates video signals from graphics data stored in memory, and it is known for using very little memory to store graphical data. The CPU updates the PPU on what has changed throughout the game using memory-mapped I/O: setting up new sprite positions (great for moving objects: player, enemies, projectiles), new level data, and new viewport offsets. With wideNES running in an emulator, it's easy to track the values written to the PPUSCROLL register (which controls the viewport X/Y offset), and therefore easy to measure how much the screen has scrolled between two frames. There is a limitation to this technique, though: you can't get a complete map of the game unless the player manually explores all of it.

Scrolling past 256

The NES is an 8-bit system, and the PPUSCROLL register accepts only 8-bit values, limiting the maximum scroll offset to 255px. On scrolling past 255, the PPUSCROLL register wraps to 0, which is why the scroll value in Super Mario Bros. would appear to bounce back to the start when Mario moved too far right. wideNES handles scrolling past 256 by completely ignoring the PPUCTRL register and simply looking at the PPUSCROLL delta between frames: if PPUSCROLL unexpectedly jumps up toward ~256, the player character has moved left/up a screen, whereas if it jumps down toward ~0, the player has moved right/down a screen. This approach does not work for games with static UI elements such as HUDs, masks, and status bars at the edges of the screen; to solve this, wideNES implements several rules which detect and mask off static screen elements automatically.

Detecting "scenes"

Most NES games are split into many smaller "scenes", with doors or transition screens that move between them. wideNES uses perceptual hashing to detect when a scene changes. Perceptual hash functions keep similar inputs "close" to one another in the output space, making them perfect for detecting similar images. Perceptual hashes can get incredibly complex, with some able to detect similar images even when the images have been rotated, scaled, stretched, or color shifted, but wideNES doesn't need a complex hash function, since each frame is always exactly the same size. (A toy sketch of both tricks appears below.)

Work continues on improving the wideNES core and on improving ANESE's wideNES implementation. For now, you can explore the ANESE emulator and take a trip down memory lane!
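Here is a toy Python sketch of the two tricks just described (an illustration, not ANESE's actual implementation): a wrap-aware PPUSCROLL delta, and an average-hash style perceptual hash for scene-change detection.

```python
def scroll_delta(prev: int, curr: int) -> int:
    """PPUSCROLL is 8-bit, so raw deltas wrap at 256."""
    d = curr - prev
    if d > 128:       # jumped up toward ~256: moved left/up a screen
        d -= 256
    elif d < -128:    # jumped down toward ~0: moved right/down a screen
        d += 256
    return d

def average_hash(frame):
    """frame: 2D list of grayscale pixels (a tiny, downscaled screenshot).
    Each pixel brighter than the mean contributes a 1 bit."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p > mean) << i for i, p in enumerate(pixels))

def scene_changed(hash_a, hash_b, threshold=10):
    """Hamming distance between two hashes; a large distance suggests
    a scene transition rather than ordinary scrolling."""
    return bin(hash_a ^ hash_b).count("1") > threshold

assert scroll_delta(255, 1) == 2    # kept scrolling right across the wrap
assert scroll_delta(1, 255) == -2   # scrolled back left across the wrap
```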
For more information, check out the official wideNES blog post.

Meet yuzu – an experimental emulator for the Nintendo Switch
AI for game developers: 7 ways AI can take your game to the next level
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior