
Tech News

Soft skills every data scientist should teach their child

Aaron Lazar
09 Nov 2017
7 min read
Data scientists work hard to upskill their technical competencies. A rapidly changing technology landscape demands a continuous ramp-up of skills: mastering a new programming language like R, Python, or Java, exploring new machine learning frameworks and libraries like TensorFlow or Keras, and understanding cutting-edge algorithms like deep convolutional networks and k-means, to name a few. Had they lived in Dr. Frankenstein's world, where scientists worked away in their labs, cut off from the rest of the world, that would have sufficed. But in the real world, data scientists use data and work with people to solve real-world problems for people. They need to learn something more, something that forms a bridge between their ideas and hypotheses and the rest of the world. Something that's more of an art than a skill these days. We're talking about soft skills for data scientists. Today we'll listen in on a conversation between a father and son, as we learn some critical soft skills data scientists need to make it big in the data science world.

One chilly evening, Tommy is sitting with his dad in their grassy backyard with the radio on, humming along to their favourite tunes. Tommy, gazing up at the sky for a while, asks his dad, "Dad, what are clouds made of?" Dad takes a sip of beer and replies, "Mostly servers, son. And tonnes of data." Still gazing up, Tommy takes a deep breath, pondering what his dad just said.

Tommy: Tell me something. What's the most important thing you've learned in your career as a data scientist?

Dad smiles: I'm glad you asked, son. I'm going to share something important with you. Something I have learned over all these years crunching and munching data. I want you to keep this to yourself and remember it for as long as you can, okay?

Tommy: Yes, dad.

Dad: Atta boy! Okay, the first thing you gotta do if you want to be successful is you gotta be curious! Data is everywhere and it can tell you a lot. But if you're not curious to explore data and tackle it from every angle, you will remain mediocre at best. Have an open mind: look at things through a kaleidoscope and challenge assumptions and presumptions. Innovation is the key to making the cut as a data scientist.

Tommy nods his head approvingly. Dad, satisfied that Tommy is following along, continues.

Dad: One of the most important skills a data scientist should possess is great business acumen. Now, I know you must be wondering why one would need business acumen when all they're doing is gathering a heap of data and making sense of it.

Tommy looks straight-faced at his dad.

Dad: Well, a data scientist needs to know the business like the back of their hand because unless they do, they won't understand what the business' strengths and weaknesses are and how data can contribute towards boosting its success. They need to understand where the business fits into the industry and what it needs to do to remain competitive.

Dad's last statement is rewarded with an energetic, affirmative nod from Tommy. Smiling, dad is quite pleased with the response.

Dad: Communication is next on the list. Without a clever tongue, a data scientist will find himself going nowhere in the tech world. Gone are the days when technical knowledge was all that was needed to sustain a career. A data scientist's job is to help a business make critical, data-driven decisions. Of what use is it to the non-technical marketing or sales teams if the data scientist can't communicate his or her insights in a clear and effective way? A data scientist must also be a good listener to truly understand what the problem is and come up with the right solution.

Tommy leans back in his chair, looking up at the sky again, thinking how he would communicate insights effectively.

Dad continues: Very closely associated with communication is the ability to present well, or as a data scientist would put it, to tell tales that inspire action. A data scientist might have to put their findings before an entire board of directors, who will be extremely eager to know why they need to take a particular decision and how it will benefit the organization. Here, clear articulation, a knack for storytelling, and strong convincing skills are all important for the data scientist to get the message across in the best way.

Tommy quips: Like the way you convince mom to do the dishes every evening?

Dad playfully punches Tommy: Hahaha, you little rascal!

Tommy: Are there any more skills a data scientist needs to excel at what they do?

Dad: Indeed, there are! True data science is a research activity, where problems with unclear or unobvious solutions get solved. There are times when even the nature of the problem isn't clear. A data scientist should be skilled at performing their own independent research: snooping around for information or data, gathering it, and preparing it for further analysis. Many organisations look for people with strong research capabilities before they recruit them.

Tommy: What about you? Would you recruit someone without a research background?

Dad: Personally, no. But that doesn't mean I would only hire a PhD. Even an MSc would do, if they were able to justify their research project and convince me that they're capable of performing independent research. I wouldn't hesitate to take them on board. Here's where I want to share one of the most important skills I've learned in all my years. Any guesses on what it might be?

Tommy: Hiring?

Dad: Ummmmm… I'll give this one to you 'cos it's pretty close. The actual answer is, of course, a much broader term: management. It encompasses everything from hiring the right candidates for your team to practically everything a person handling a team does.

Tommy: And what's that?

Dad: Well, as a senior data scientist, one would be expected to handle a team of less experienced data scientists, managing, mentoring, and helping them achieve their goals. It's a very important skill to hone as you climb up the ladder. Some learn it through experience, others by taking management courses. Either way, this skill is important if one is to succeed in a senior role. And that's about all I have for now. I hope at least some of this benefits you as you step into your first job tomorrow.

Tommy smiles: Yeah dad, it's great to have someone in the same line of work to look up to when I'm just starting out my career. I'm glad we had this conversation. Holding up an empty can, he says, "I'm out, toss me another beer, please."

Soft skills for data scientists: a quick recap

In addition to keeping yourself technically relevant, to succeed as a data scientist you need to:

- Be curious: explore data from different angles; question assumptions and presumptions.
- Have strong business acumen: know your customer, know your business, know your market.
- Communicate effectively: speak the language of your audience; listen carefully to understand the problem you want to solve.
- Master the art of presenting well: tell stories that inspire action; get your message across through a combination of data storytelling, negotiation, and persuasion skills.
- Be a problem solver: do your independent research, get your hands dirty, and dive deep for answers.
- Develop your management capabilities: manage, mentor, and help other data scientists reach their full potential.

TextMate 2.0, the text editor for macOS, releases

Amrata Joshi
16 Sep 2019
3 min read
Yesterday, the team behind TextMate released TextMate 2.0, announcing that its code is available via the project's GitHub repository. The team had open-sourced the alpha version of TextMate 2.0 back in 2012, partly out of concern over Apple limiting user and developer freedom on the Mac platform. In this release, the qualifier suffix in the version string has been dropped and the 32-bit APIs have been replaced. The release also comes with improved accessibility support.

What's new in TextMate 2.0?

- Easy swapping: this release allows users to easily swap pieces of code.
- Convenient search results: TextMate presents search results in a way that lets users switch between matches, extract matched text, and preview desired replacements.
- Version control: users can see changes in the file browser view and check the changes made to lines of code in the editor view.
- Improved commands: TextMate features WebKit as well as a dialog framework for Mac-native or HTML-based interfaces (a small sketch of how commands work appears at the end of this article).
- Snippets: users can turn commonly used pieces of text or code into snippets with transformations, placeholders, and more.
- Bundles: users can use bundles for customization across a number of different languages, workflows, markup systems, and more.
- Macros: TextMate features macros that eliminate repetitive work.

The project was supposed to ship years ago, and the long-awaited release has made a lot of users happy. A user commented on GitHub, "Thank you @sorbits. For making TextMate in the first place all those years ago. And thank you to everyone who has and continues to contribute to the ongoing development of TextMate as an open source project. ~13 years later and this is still the only text editor I use… all day every day." Another user commented, "Immense thanks to all those involved over the years!"

A user commented on Hacker News, "I have a lot of respect for Allan Odgaard. Something happened, and I don't want to speculate, that caused him to take a break from Textmate (version 2.0 was supposed to come out 9 or so years ago). Instead of abandoning the project he open sourced it and almost a decade later it is being released. Textmate is now my graphical Notepad on Mac, with VS Code being my IDE and vim my text editor. Thanks Allan."

It is still not clear what took TextMate 2.0 so long to be released. According to a few users on Hacker News, Allan Odgaard, the creator of TextMate, wanted to improve on the design of TextMate 1 and realised that doing so would require rewriting almost everything, which may be what consumed his time. Another comment reads, "As Allan was getting less feedback about the code he was working on, and less interaction overall from users, he became less motivated. As the TextMate 2 project dragged past its original timeline, both Allan and others in the community started to get discouraged. I would speculate he started to feel like more of the work was a chore rather than a joyful adventure."

To know more about this news, check out the release notes.

Other interesting news in programming:

- Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
- GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
- GitHub Package Registry gets proxy support for the npm registry
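On those commands and bundles: a TextMate command is, at bottom, an executable script that receives the document or selection on stdin and prints its output back to the editor. The following is a hypothetical sketch, not a command from any stock bundle; the input/output behaviour ("Selected Text" in, "Replace Selected Text" out) is configured per command in the bundle editor:

```python
#!/usr/bin/env python3
# Hypothetical TextMate bundle command: title-case the selection.
# With input set to "Selected Text" and output to "Replace Selected
# Text", TextMate pipes the selection in and swaps in what we print.
import sys

text = sys.stdin.read()
sys.stdout.write(text.title())
```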

AutoML: Developments and where it is heading

Savia Lobo
05 Feb 2018
6 min read
With the growing demand for ML applications, there is also demand for machine learning tasks such as data preprocessing and optimizing model hyperparameters to be easily handled by non-experts. These tasks are repetitive, and complex enough that they were long considered the preserve of ML experts. To address this, and to maintain off-the-shelf quality of machine learning methods without expert knowledge, Google came out with a project named AutoML, an approach that automates the design of ML models. You can also refer to our article on Automated Machine Learning (AutoML) for a clearer understanding of how AutoML functions.

Trying AutoML on smaller datasets

AutoML brought an altogether new dimension to machine learning workflows, where repetitive tasks performed by human experts can be taken over by machines. When Google started off with AutoML, it applied the approach to two smaller deep learning datasets, CIFAR-10 and Penn Treebank, to test it on image recognition and language modeling tasks respectively. The result: the AutoML approach could design models that were on par with those designed by ML experts. Moreover, on comparing the designs drafted by humans and by AutoML, it was seen that the machine-suggested architectures included new elements, elements later found to alleviate gradient vanishing/exploding issues, meaning the machine had proposed architectures that could be useful across multiple tasks. The machine-designed architecture also has many channels through which gradients can flow backwards, which could help explain why LSTM RNNs work better than standard RNNs.

Trying AutoML on larger datasets

After its success on small-scale datasets, Google tested AutoML on large-scale datasets such as ImageNet and the COCO object detection dataset. Testing AutoML on these was a challenge because they are orders of magnitude larger; simply applying AutoML directly to ImageNet would require many months of training. To make the approach tractable at this scale, Google made some alterations:

- Redesigning the search space so that AutoML could find the best layer, which can then be stacked many times in a flexible manner to create a final network.
- Carrying out architecture search on the CIFAR-10 dataset and transferring the best learned architecture to ImageNet image classification and COCO object detection.

Thus, AutoML found the two best layers, the normal cell and the reduction cell, which when combined resulted in a novel architecture called "NASNet". These two work well on CIFAR-10, and also on ImageNet and COCO object detection. NASNet achieved a prediction accuracy of 82.7% on the ImageNet validation set, as stated by Google, surpassing all previous Inception models built by Google. Further, the features learned from ImageNet classification were transferred to object detection tasks on the COCO dataset. The learned features, combined with the Faster R-CNN framework, produced state-of-the-art predictive performance on the COCO object detection task in both the largest and the mobile-optimized models. Google suspects that these image features learned on ImageNet and COCO can be reused for various other computer vision applications.
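As an aside, ImageNet-pretrained NASNet weights later became available in mainstream frameworks. Here is a minimal sketch of reusing them for feature extraction, assuming the Keras applications API (this is an illustration, not code from Google's announcement):

```python
# A minimal sketch of reusing ImageNet-pretrained NASNet features,
# assuming TensorFlow with the Keras applications API is installed.
import numpy as np
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input

# include_top=False drops the ImageNet classifier, keeping the
# convolutional trunk as a generic feature extractor.
extractor = NASNetMobile(weights="imagenet", include_top=False,
                         input_shape=(224, 224, 3), pooling="avg")

batch = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)
features = extractor.predict(batch)
print(features.shape)  # one pooled feature vector per input image
```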
Google subsequently open-sourced NASNet for inference on image classification and for object detection, in the Slim and Object Detection TensorFlow repositories.

Towards Cloud AutoML: an automated machine learning platform for everyone

Cloud AutoML has been Google's latest buzz for its customers, as it aims to make AI available to everyone. Using Google's advanced techniques, such as learning2learn and transfer learning, Cloud AutoML helps businesses with limited ML expertise start building their own high-quality custom models. Cloud AutoML thus benefits AI experts by improving their productivity and letting them explore new fields in AI, and the experts can also help less-skilled engineers build powerful systems. Companies such as Disney and Urban Outfitters are using AutoML to make search and shopping on their websites more relevant.

With AutoML moving to the cloud, Google released its first Cloud AutoML product, Cloud AutoML Vision, an image recognition tool that makes it fast and easy to build custom ML models. The tool has a drag-and-drop interface that lets users upload images, train and manage models, and then deploy those trained models directly on Google Cloud. When used to classify popular public datasets like ImageNet and CIFAR, Cloud AutoML Vision has shown state-of-the-art results, including fewer misclassifications than the generic ML APIs.

Some highlights of Cloud AutoML Vision:

- It is built on Google's leading image recognition approaches, along with transfer learning and neural architecture search technologies. One can therefore expect an accurate model even with limited ML expertise in-house.
- One can build a simple model in minutes, or a full production-ready model in a day, in order to pilot an AI-enabled application.
- AutoML Vision has a simple graphical UI with which one can easily specify data. It then turns the data into a high-quality model customized for one's specific needs.

Starting with images, Google plans to roll out Cloud AutoML tools and services for text and audio too. However, Google isn't the only one in the race; competitors including AWS and Microsoft are also bringing in tools, such as Amazon's SageMaker and Microsoft's service for customizing image recognition models, to help developers automate machine learning. Some other automated tools include:

- Auto-sklearn: an automated project that helps users of scikit-learn, a package of common machine learning functions, choose the right estimator. Auto-sklearn includes a generic estimator that analyses a dataset to determine the best algorithm and set of hyperparameters for a given scikit-learn job (a minimal usage sketch follows this list).
- Auto-WEKA: similar in inspiration to Auto-sklearn, aimed at machine learners using the Java programming language and the Weka ML package. Auto-WEKA uses a fully automated approach to select a learning algorithm and set its hyperparameters, unlike previous methods which addressed these in isolation.
- H2O Driverless AI: uses a web-based UI and is specifically designed for business users who want to gain insights from data without getting into the intricacies of machine learning algorithms. The tool lets users choose one or more target variables in the dataset, and the system provides the answer in the form of interactive charts, explained with annotations in plain English.
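Here is the promised sketch of the hands-off workflow these tools aim for, using auto-sklearn; the time budgets are arbitrary and the example is illustrative rather than canonical:

```python
# A minimal auto-sklearn sketch: the library searches algorithms and
# hyperparameters itself within the given time budget.
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,  # total search budget, in seconds
    per_run_time_limit=30,        # cap for each candidate model
)
automl.fit(X_train, y_train)
print(accuracy_score(y_test, automl.predict(X_test)))
```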
Currently, Google's AutoML leads the pack, and it will be exciting to see how Google scales an automated ML environment to match traditional ML. Nor is Google alone: other businesses are also contributing to the movement towards an automated machine learning ecosystem. We have seen several tools join the automation league and can expect more to follow. These tools may also move to the cloud in the future for wider availability to non-experts, much like Google's Cloud AutoML. With machine learning going automated, we can expect more and more systems to move a step closer to widening the scope of AI.

AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural-sounding speech

Bhagyashree R
29 Apr 2019
4 min read
In research published in the journal Nature on Monday, a team of neuroscientists from the University of California, San Francisco introduced a neural decoder that can synthesize natural-sounding speech from brain activity. The research was led by Gopala Anumanchipalli, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab, and is being developed in the laboratory of Edward Chang, a professor of neurological surgery at the University of California.

Why is this neural decoder being introduced?

Many people lose their voice to stroke, traumatic brain injury, or neurodegenerative diseases such as Parkinson's disease, multiple sclerosis, and amyotrophic lateral sclerosis. Assistive devices that track very small eye or facial muscle movements do exist, enabling people with severe speech disabilities to spell out their thoughts letter by letter. However, generating text or synthesized speech with such devices is often time-consuming, laborious, and error-prone. Another limitation is that they permit a maximum of about 10 words per minute, compared to the 100 to 150 words per minute of natural speech.

This research shows that it is possible to generate a synthesized version of a person's voice that is controlled by their brain activity. The researchers believe that, in the future, such a device could enable individuals with severe speech disability to communicate fluently. It could even reproduce some of the "musicality" of the human voice that conveys the speaker's emotions and personality. "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said Chang. "This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss."

How does this system work?

This research builds on an earlier study by Chartier and Anumanchipalli, which showed how the speech centers in our brain choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech. In the new study, Anumanchipalli and Chartier asked five patients being treated at the UCSF Epilepsy Center to read several sentences aloud. These patients had electrodes implanted into their brains to map the source of their seizures in preparation for neurosurgery. Simultaneously, the researchers recorded activity from a brain region known to be involved in language production.

The researchers used the audio recordings of the volunteers' voices to work out the vocal tract movements needed to produce those sounds. With this detailed map from sound to anatomy in hand, they created a realistic virtual vocal tract for each volunteer that could be controlled by their brain activity. The system comprises two neural networks:

- A decoder that transforms the brain activity patterns produced during speech into movements of the virtual vocal tract.
- A synthesizer that converts these vocal tract movements into a synthetic approximation of the volunteer's voice.

Here's a video depicting how the system works: https://www.youtube.com/watch?v=kbX9FLJ6WKw&feature=youtu.be
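The study's stages are recurrent networks; as a loose conceptual sketch of the two-network pipeline above, with layer sizes and feature dimensions invented for illustration (none of these numbers come from the paper):

```python
# A loose conceptual sketch of the two-stage decode pipeline described
# above; layer sizes and feature dimensions are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

N_ELECTRODES = 256   # neural features per time step (assumed)
N_ARTIC = 33         # articulatory/vocal-tract features (assumed)
N_ACOUSTIC = 32      # acoustic parameters driving synthesis (assumed)

# Stage 1: brain activity -> virtual vocal tract movements.
decoder = models.Sequential([
    layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                         input_shape=(None, N_ELECTRODES)),
    layers.TimeDistributed(layers.Dense(N_ARTIC)),
])

# Stage 2: vocal tract movements -> acoustic parameters for a voice.
synthesizer = models.Sequential([
    layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                         input_shape=(None, N_ARTIC)),
    layers.TimeDistributed(layers.Dense(N_ACOUSTIC)),
])

acoustics = synthesizer(decoder(tf.random.normal([1, 100, N_ELECTRODES])))
print(acoustics.shape)  # (1, 100, N_ACOUSTIC): one frame per time step
```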
The researchers observed that the synthetic speech produced by this two-stage system was markedly better than synthetic speech decoded directly from the volunteers' brain activity. The generated sentences were also understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

The system is still in its early stages. Explaining its limitations, Chartier said, "We still have a ways to go to perfectly mimic spoken language. We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

Read the full report on UCSF's official website.

- OpenAI introduces MuseNet: A deep neural network for generating musical compositions
- Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza
- Google open-sources GPipe, a pipeline parallelism library to scale up Deep Neural Network training

Are Debian and Docker slowly losing popularity?

Savia Lobo
12 Mar 2019
5 min read
Michael Stapelberg, in a blog post, explained why he plans to wind down his involvement in the Debian software distribution. Stapelberg is the author of the i3 tiling window manager for Linux, the Debian Code Search engine, and RobustIRC, a netsplit-free IRC network. He said he'll reduce his involvement in Debian by:

- transitioning packages to be team-maintained
- removing the Uploaders field on packages with other maintainers
- orphaning packages where he is the sole maintainer

Stapelberg describes the pain points in Debian that led him to step away.

Change process in Debian

Debian follows a change process in which packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian; the tool, however, is advisory rather than enforced. "currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages", Stapelberg writes. "Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder."

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to one repository), each repository can use any SCM (git and svn are common) or no SCM at all, and each repository can be hosted on a different site. In practice, non-standard hosting options are used rarely enough not to justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Stapelberg said that after he noticed the workflow fragmentation in the Go packaging team, he tried to fix it with a workflow-changes proposal, but did not succeed in getting it implemented.

Debian is hard to machine-read

"While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome." debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There used to be a fedmsg instance for Debian, but it no longer seems to exist. "It is unclear where to get notifications from for new packages, and where best to fetch those packages", Stapelberg says.

A user on Hacker News said, "I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been." Check out Stapelberg's blog post for the full details.

Maish Saidel-Keesing believes Docker will die soon

Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, mentions in his blog post that "the days for Docker as a company are numbered and maybe also a technology as well".

https://twitter.com/maishsk/status/1019115484673970176

Docker undoubtedly brought containerization technology into the mainstream. However, Saidel-Keesing says, "Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge." He also talks about how the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something besides Docker as the runtime; Docker is no longer the only runtime in use. "Kelsey Hightower - has updated his Kubernetes the hard way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today clearcontainers, katacontainers and the list is continuously growing", Saidel-Keesing says. "What triggered me was a post from Scott Mccarty - about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools"

https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, "Lo and behold - no more docker package available in RHEL 8". He further added, "If you're a container veteran, you may have developed a habit of tailoring your systems by installing the 'docker' package. On your brand new RHEL 8 Beta system, the first thing you'll likely do is go to your old friend yum. You'll try to install the docker package, but to no avail. If you are crafty, next, you'll search and find this package: podman-docker.noarch: 'package to Emulate Docker CLI using podman.'" To know more about this news, head over to Maish Saidel-Keesing's blog post.

- Docker Store and Docker Cloud are now part of Docker Hub
- Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
- It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

GitHub for Unity 1.0 is here with Git LFS and file locking support

Sugandha Lahoti
19 Jun 2018
3 min read
GitHub for Unity is now available in version 1.0. GitHub for Unity 1.0 is a free and open source Unity editor extension that brings Git into Unity 5.6, 2017.x, and 2018.x. GitHub for Unity was announced as an alpha version in March 2017, and the beta version was released earlier this year. Now the full release is available, just in time for Unite Berlin 2018, scheduled for June 19-21. GitHub for Unity 1.0 helps you stay in sync with your team: you can collaborate with other developers, pull down recent changes, and lock files to avoid troublesome merge conflicts. It also introduces two key features for game developers and their teams: managing large assets and critical scene files using Git, with the same ease as managing code files.

Updates to Git LFS

GitHub for Unity 1.0 has improved Git and Git LFS support for Mac. Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git. Previously, the package included full portable installations of Git and Git LFS. Now, these are downloaded when needed, reducing the package size to 1.6MB. Critical Git and Git LFS updates and patches are now distributed faster and in a more flexible way.

File locking

File locking management is now a top-level view within the GitHub window. With this new feature, developers can lock or unlock multiple files.

Other features include:

- Diffing support to visualize changes to files. The diffing program can be customized (set in the "Unity Preferences" area) directly from the "Changes" view in the GitHub window.
- No command-line hassle: developers can view project history, experiment in branches, craft a commit from their changes, and push their code to GitHub without leaving Unity.
- A Git action bar for essential operations.
- Update notifications: game developers get a notification within Unity whenever a new version is available, and can choose to download or skip the current update.
- Easy email sign-in: developers can sign in to their GitHub account with their GitHub username or the email address associated with their account.

GitHub for Unity 1.0 is available for download at unity.github.com and from the Unity Asset Store. Lead developer Andreia Gaita will give a GitHub for Unity talk on June 19 at Unite Berlin to explain how to incorporate Git into your game development workflow.

- Put your game face on! Unity 2018.1 is now available
- Unity announces a new automotive division and two-day Unity AutoTech Summit
- AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

TensorFlow 1.10 RC0 released

Amey Varangaonkar
24 Jul 2018
2 min read
Continuing the recent trend of rapid updates introducing significant fixes and new features, Google has released the first release candidate for TensorFlow 1.10. TensorFlow 1.10 RC0 brings improvements to model training and evaluation, and to how TensorFlow runs in a local environment. This is TensorFlow's fifth update release in just over a month, a run that includes two major version updates, the previous one being TensorFlow 1.9.

What's new in TensorFlow 1.10 RC0?

- The tf.contrib.distributions module is deprecated in this version. This module is primarily used to work with statistical distributions.
- An upgrade to NCCL 2.2 is mandatory in order to perform GPU computing with this version of TensorFlow, for added performance and efficiency.
- Model training speed can now be optimized by improving the communication between the model and the TensorFlow resources. For this, the RunConfig function has been updated in this version (see the sketch below).
- The TensorFlow development team also announced support for Bazel, a popular build and test automation tool, and deprecated support for cmake starting with TensorFlow 1.11.
- This version also incorporates bug fixes and performance improvements to tf.data, tf.estimator, and other related modules.

For the full feature list of this release candidate, check out TensorFlow's official release page on GitHub.

No news on TensorFlow 2.0 yet

Many developers were expecting the next major release of TensorFlow, TensorFlow 2.0, in late July or August. However, the announcement of this release candidate and the mention of the next version update (1.11) mean they will have to wait some more time before learning about the next breakthrough release.

Read more:

- Why Twitter (finally!) migrated to Tensorflow
- Python, Tensorflow, Excel and more - Data professionals reveal their top tools
- Can a production ready Pytorch 1.0 give TensorFlow a tough time?
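As a rough sketch of two of the items above (a hedged illustration: it assumes the tensorflow_probability package as the successor to tf.contrib.distributions, and all argument values are arbitrary):

```python
# Sketch: migrating off tf.contrib.distributions and configuring an
# Estimator via RunConfig. Values are illustrative, not prescriptive.
import tensorflow as tf
import tensorflow_probability as tfp

# tf.contrib.distributions is deprecated; TensorFlow Probability is
# its long-term home.
dist = tfp.distributions.Normal(loc=0.0, scale=1.0)
sample = dist.sample(5)

# RunConfig centralizes runtime knobs for Estimator-based training.
config = tf.estimator.RunConfig(
    model_dir="/tmp/model",
    save_checkpoints_steps=1000,
    keep_checkpoint_max=3,
)
estimator = tf.estimator.DNNClassifier(
    hidden_units=[64, 32],
    feature_columns=[tf.feature_column.numeric_column("x", shape=[4])],
    n_classes=3,
    config=config,
)
```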

OpenAI LP, a new “capped-profit” company to accelerate AGI research and attract top AI talent

Fatema Patrawala
12 Mar 2019
3 min read
In a move that has surprised many, OpenAI yesterday announced the creation of a new for-profit company to balance its huge expenditure on compute and AI talent. Sam Altman, the former president of Y Combinator who stepped down last week, has been named CEO of the new "capped-profit" company, OpenAI LP. But some worry that this move may leave the innovative company no different from the other AI startups out there.

With OpenAI LP, the mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world. OpenAI mentions on its blog that "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress." Any returns beyond the cap revert to OpenAI (a small worked example of this cap appears at the end of this piece). OpenAI LP's primary obligation is to advance the aims of the OpenAI Charter; all investors and employees sign agreements that OpenAI LP's obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

The major reason behind the new for-profit subsidiary can be put plainly: OpenAI needs more money. The company anticipates spending billions of dollars in the coming years on building large-scale cloud compute, attracting and retaining talented people, and developing AI supercomputers. The cash burn rate of a top AI research company is staggering. Consider OpenAI's recent OpenAI Five project, a set of coordinated AI bots trained to compete against human professionals in the video game Dota 2: OpenAI rented 128,000 CPU cores and 256 GPUs at approximately US$2,500 per hour for the time-consuming process of training and fine-tuning its OpenAI Five models. Additionally, consider the skyrocketing cost of retaining top AI talent. A New York Times story revealed that OpenAI paid its chief scientist, Ilya Sutskever, more than US$1.9 million in 2016. The company currently employs some 100 pricey talents for developing its AI capabilities, safety, and policies.

OpenAI LP will be governed by the original OpenAI board. Only a few members of the board of directors are allowed to hold financial stakes, and only those who do not hold stakes can vote on decisions where financial interests are seen to conflict with OpenAI's mission.

People have linked the new for-profit company to OpenAI's recent controversial decision to withhold the code and training dataset for its language model GPT-2, ostensibly due to concerns they might be used for malicious purposes such as generating fake news. A tweet from a software engineer suggested an ulterior motive: "I now see why you didn't release the fully trained model of #gpt2". OpenAI chairman and CTO Greg Brockman shot back: "Nope. We aren't going to commercialize GPT-2."

OpenAI aims to forge a sustainable path towards long-term AI development while striking a balance between benefiting humanity and turning a profit. A big part of OpenAI's appeal to top AI talent has been its not-for-profit character; will OpenAI LP mar that? And can OpenAI really strike that balance? Whether the for-profit shift will accelerate OpenAI's mission or prove a detrimental detour remains to be seen, but the journey ahead is bound to be challenging.

- OpenAI's new versatile AI model, GPT-2, can efficiently write convincing fake news from just a few words
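To make the 100x cap concrete, here is a small worked sketch; the figures are invented for illustration and do not come from OpenAI's announcement:

```python
# Illustrative arithmetic for a capped-profit return; numbers invented.
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0):
    """Split a gross return between the investor (up to the cap)
    and the nonprofit (everything above it)."""
    cap = investment * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)
    return to_investor, to_nonprofit

# A hypothetical $10M stake that returned $1.5B: the investor keeps
# $1B (100x), and the remaining $500M reverts to the nonprofit.
print(capped_return(10e6, 1.5e9))  # (1000000000.0, 500000000.0)
```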

Unity 2D & 3D game kits simplify Unity game development for beginners

Amey Varangaonkar
18 Apr 2018
2 min read
The rise of the video game industry over the last two decades has been staggering, to say the least. Considered an area with massive revenue potential, it has seen a revolution in the way games are designed, developed, and played across various platforms. Unity, the most popular cross-platform game development platform, is now encouraging even non-programmers to take up game development by equipping them with state-of-the-art tools for designing interactive games.

Unity game development simplified for non-developers

These days there are a lot of non-developers, game designers, and even artists who wish to build their own games, and they are now in for a treat. Unity has come up with 2D and 3D Game Kits with which users can develop 2D or 3D gameplay without the need to code. With the help of these game kits, beginners can use the elements, tools, and systems within the kit to design their gameplay. The 2D Game Kit currently supports Unity 2017.3 and higher, while the 3D Game Kit requires Unity 2018.1 or higher.

Visual scripting with Bolt

Unity has also introduced a new visual scripting tool called Bolt, which allows non-programmers to create new gameplay from scratch and design interactive systems in Unity without writing a single line of code. With live editing, predictive debugging, and a whole host of other features, Bolt ensures you can get started designing your own game in no time.

The idea behind these game kits and the Bolt scripting engine is to encourage more non-programmers to take up game development and let their creative juices flow. They also serve as a starting point for absolute beginners on their journey into game development. To learn more about how to use these kits, check out Unity's introduction to the Game Kits.

React Newsletter #227 from ui.dev's RSS Feed

Matthew Emerick
25 Aug 2020
2 min read
Articles

- Build a Confirmation Modal in React with State Machines: In this article, Dave builds a reusable state machine using React and Robot to handle a modal confirmation flow, and wraps it up into a custom hook.
- Why the OKCupid team decided against using GraphQL for local state management: The OKCupid team describe themselves as "pretty big fans of using GraphQL." So why did they decide against using it for local state management? Read the article to find out.
- Introduction to props in React: This one's for the beginners. In this post you'll learn how to properly use props to pass data to components in React.
- State of Frontend 2020 survey results: Some interesting results in the State of Frontend 2020 survey, specifically around React, Redux, and Gatsby vs Next.

Tutorials

- 8 ways to deploy a React app for free: This tutorial demonstrates how to deploy a React application in eight different ways. All the services described in this post are completely free, with no hidden credit card requirements.

Sponsor

- React developers are in demand on Vettery: Vettery is an online hiring marketplace that's changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

- React + TypeScript Cheatsheets: Cheatsheets for experienced React developers getting started with TypeScript.
- react-colorful: A tiny color picker component for modern React apps.

Videos

- Fullstack React, GraphQL, TypeScript Tutorial: In this 14-hour-long video, Ben Awad walks you through building a fullstack React, GraphQL, TypeScript app. This tutorial is geared towards intermediate developers looking to get their feet wet with these technologies.
- Ionic Framework Horizontal & SideMenu Navigation in ReactJS Application: This 8-minute video demonstrates how to use Window.matchMedia() to run media queries and, based on the result, hide or show the side menu.

React Newsletter #226 from ui.dev's RSS Feed

Matthew Emerick
18 Aug 2020
2 min read
News

- Storybook 6.0 is released: Storybook 6.0 is a lot easier to set up and also incorporates many best practices for component-driven development. Other highlights include: zero-configuration setup; a next-gen, dynamic story format; live-edit component examples; and the ability to combine multiple story books.
- Rome: A new toolchain for JavaScript: Sebastian McKenzie announced Rome's first beta release last week, and called it "the spiritual successor of Babel" (he's allowed to say that because he created Babel). "Rome is designed to replace Babel, ESLint, webpack, Prettier, Jest, and others." We wrote more in depth about Rome in yesterday's issue of Bytes.

Articles

- Understanding React's useRef Hook: In this article you'll learn everything you'd ever want to know about React's useRef Hook, including but not limited to how you can recreate it with useState (because, why not?).
- A Guide to Commonly Used React Component Libraries: This guide gives some helpful background info and the pros and cons of various well-known component libraries.

Tutorials

- Build a Landing Page with Chakra UI - Part 1: This tutorial series will teach you how to build a responsive landing page in React using the Chakra UI design system. This first part goes over how to set up your landing page and build the hero section.
- How to setup HTTPS locally with create-react-app: This tutorial goes over how to serve a local React app via HTTPS. You'll be setting up HTTPS in development for a create-react-app with an SSL certificate.

Sponsor

- React developers are in demand on Vettery: Vettery is an online hiring marketplace that's changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

- Flume: An open-source library that provides a node editor for visual programming and a runtime engine for executing logic in any JS environment (also portable to non-JS).
- Vite + React + Tailwind CSS starter: A simple setup using Vite, React, and Tailwind for faster prototyping.

Videos

- How the React Native Bridge works: This short video from Jimmy Cook gives a helpful deep dive into the React Native bridge and how communication between the native side and the JavaScript side will change in the future.

Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release. It comes with an enhanced Variable Explorer and Data Viewer, as well as improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer

This release includes a built-in Variable Explorer along with a Data Viewer, which help users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. The release shows a section for variables while running code and cells in the Python Interactive window. On expanding it, users can see a list of the variables in the current Jupyter session; more variables show up automatically as they get used in the code, and the list can be sorted by clicking on each column header. Users can double-click on a row, or use the "Show variable in Data Viewer" button, to view the full data of a variable in the newly added Data Viewer and perform a simple search over its values (a small usage sketch appears at the end of this article).

Improvements to debug configuration

The process of configuring the debugger has been simplified. If a user starts debugging through the Debug panel and no debug configuration exists, they will now be prompted to create one. Instead of manually editing the launch.json file, users can create a debug configuration through a set of menus.

Improvements to the Python Language Server

This release comes with fixes and improvements to the Python Language Server. The team has added back features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition" and "Find All References". Loading time and memory usage have also been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.

Read also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes

- The default behavior of the debugger has been changed to display return values.
- "Unit Test" has been renamed to "Test" or "Testing".
- The debugStdLib setting has been replaced with justMyCode.
- A setting has been added to enable/disable the data science code lens.
- The reliability of test discovery when using pytest has been improved.

Bug fixes

- Issues with cell spacing have been resolved.
- Problems with errors not showing up on import have been fixed.
- Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
- Debugging and Profiling Python Scripts [Tutorial]
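As a quick illustration of the workflow the Variable Explorer hooks into (a generic sketch, not from the release notes): "# %%" markers turn a plain script into cells that run in the Python Interactive window, and the variables those cells create appear in the explorer, with data frames openable in the Data Viewer:

```python
# Sketch: "# %%" markers define cells for VS Code's Python Interactive
# window; variables created here appear in the Variable Explorer.
# %%
import numpy as np
import pandas as pd

arr = np.random.rand(1000)

# %%
# After running this cell, `df` can be opened in the Data Viewer
# for sorting, filtering, and simple searches over its values.
df = pd.DataFrame({"x": arr, "y": arr ** 2})
df.describe()
```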

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more

Bhagyashree R
31 Jul 2019
3 min read
After about three long years of development, the much-awaited Blender 2.80 finally shipped yesterday. This release comes with a redesigned UI, workspaces, templates, the Eevee real-time renderer, Grease Pencil, and much more.

The user interface is revamped with a focus on usability and accessibility

Blender's user interface is revamped with a better focus on usability and accessibility. It has a fresh look and feel, with a dark theme and a modern icon set. The icons change color based on the theme you select so that they are readable against bright or dark backgrounds. Users can access the most-used features via the default shortcut keys or map their own. You can fully use Blender with a one-button trackpad or pen input, as it now uses the left mouse button for selection by default. It provides a new right-click context menu for quick access to important commands in the given context, as well as a Quick Favorites popup menu where you can add your favorite commands.

Get started with templates and workspaces

You can now choose from multiple application templates when starting a new file. These include templates for 3D modeling, shading, animation, rendering, Grease Pencil-based 2D drawing and animation, sculpting, VFX, video editing, and more. Workspaces give you a screen layout for specific tasks like modeling, sculpting, animating, or editing. Each template you choose provides a default set of workspaces that can be customized; you can create new workspaces or copy them from the templates as well.

Completely rewritten 3D viewport

Blender 2.80's completely rewritten 3D viewport is optimized for modern graphics and offers several new features. The new Workbench render engine helps you get work done in the viewport for tasks like scene layout, modeling, and sculpting. Viewport overlays let you decide which utilities are visible on top of the render. The new LookDev shading mode allows you to test multiple lighting conditions (HDRIs) without affecting the scene settings. The smoke and fire simulations have been overhauled to make them look as realistic as possible.

Eevee real-time renderer

Blender 2.80 has a new physically based real-time renderer called Eevee. It performs two roles: a renderer for final frames, and the engine driving Blender's real-time viewport for creating assets. Among the features it supports are volumetrics, screen-space reflections and refractions, depth of field, camera motion blur, and bloom. You can create Eevee materials using the same shader nodes as Cycles, which makes it easier to render existing scenes (a small scripting sketch appears at the end of this article).

2D animation with Grease Pencil

Grease Pencil enables you to combine 2D and 3D worlds right in the viewport. With this release, it has become a "full 2D drawing and animation system." It comes with a new multi-frame edit mode with which you can change and edit several frames at the same time, and a Build modifier to animate drawings, similar to the Build modifier for 3D objects. Many other features have been added to Grease Pencil as well. Watch this video to get a glimpse of what you can create with it: https://www.youtube.com/watch?v=JF3KM-Ye5_A

Check out more features of Blender 2.80 on its official website.

- Blender celebrates its 25th birthday!
- Following Epic Games, Ubisoft joins the Blender Development Fund; adopts Blender as its main DCC tool
- Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
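For readers who script Blender, switching a scene over to Eevee is a one-liner in the Python console. A small sketch, assuming the 2.80 Python API property names below are as we recall them:

```python
# Sketch: select Eevee and toggle a few of its effects via Blender's
# Python API (run inside Blender 2.80's Python console).
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'   # switch from Workbench/Cycles

# A few of the Eevee-specific settings mentioned above.
scene.eevee.use_bloom = True            # bloom
scene.eevee.use_ssr = True              # screen-space reflections
scene.eevee.use_gtao = True             # ambient occlusion
```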

React Newsletter #228 from ui.dev's RSS Feed

Matthew Emerick
01 Sep 2020
2 min read
Articles

- React Component Patterns: In this article, Alexi Taylor will help you identify the trade-offs of the different React patterns and when each pattern would be most appropriate. These patterns allow for more useful and reusable code by adhering to design principles like separation of concerns, DRY, and code reuse. Each major pattern includes an example hosted on CodeSandBox.
- Redux vs React's Context API: For the last few years, Redux has been THE state management solution for bigger React apps. It's far from being dead and yet, a strong enemy is arising: React's Context API! In this article you'll learn: What is Redux? What is React's Context API? Will React's Context API replace Redux?

Tutorials

- Morphing SVG With react-spring: In this tutorial, Mikael gives a helpful overview of how to add the popular morphing effect to an SVG using the react-spring animation library.

Sponsor

- React developers are in demand on Vettery: Vettery is an online hiring marketplace that's changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

- Intro to Storybook: Two Storybook maintainers just released this new collection of guides that walks through all of the new Storybook features while still covering the fundamentals.
- Effectful JavaScript Debugger: A new JavaScript/TypeScript debugger with hot-swapping, API and persistent state, time traveling, and more.
- Zustand: A small, fast, and scalable state-management solution. It has a hooks-based API, isn't boilerplate-y or opinionated, and is "just enough to be explicit and flux-like."

Videos

- Why Next.js is the future of React: In this 10-minute video, Lee Robinson shares why he believes that Next.js will be the go-to way to build React applications in the future. Keep in mind that Lee is a Next.js maintainer, so he might be a little biased. 🙂
- RN Casts: A collection of bite-sized React and React Native videos that each cover one specific topic (e.g. handling input events in React).

Is Dark an AWS Lambda challenger?

Fatema Patrawala
01 Aug 2019
4 min read
On Monday, Ellen Chisa, CEO and co-founder of Dark, announced in a Medium post that the project had raised $3.5 million in funding. Dark is a holistic project that includes a programming language (Darklang), an editor, and an infrastructure. The value of this, according to Chisa, is simple: "developers can code without thinking about infrastructure, and have near-instant deployment, which we're calling deployless." Along with Chisa, Dark is led by CTO Paul Biggar, who is also the founder of CircleCI, the CI/CD pioneering company. The seed round is led by Cervin Ventures, with participation from Boldstart, Data Collective, Harrison Metal, Xfactor, Backstage, Nextview, Promus, Correlation, 122 West and Yubari.

What are the key features of the Dark programming language?

One of the most interesting features of Dark is that deployments take a mere 50 milliseconds. Fast. Chisa says that currently the best teams manage deployments in around 5-10 minutes, but many take considerably longer, sometimes hours. Dark was designed to change this. It's purpose-built, Chisa seems to suggest, for continuous delivery. "In Dark, you're getting the benefit of your editor knowing how the language works. So you get really great autocomplete, and your infrastructure is set up for you as soon as you've written any code because we know exactly what is required."

She says there are three main benefits to Dark's approach:

- Automated infrastructure.
- No need to worry about a deployment pipeline. ("As soon as you write any piece of backend code in Dark, it is already hosted for you," she explains.)
- Tracing capabilities built into your code. ("Because you're using our infrastructure, you have traces available in your editor as soon as you've written any code.")

There's undoubtedly a clear sense, whatever users think of the end result, that everything has been engineered with an incredibly clear vision.

Dark has been deployed on SaaS platforms and project-tracking tools

Chisa highlights that some customers have already shipped entire products on Dark. Chase Olivieri, who built Altitude, a subscription SaaS providing personalized flight deals, using Dark, is cited by Chisa as saying that "as a bootstrapper, Dark has allowed me to move fast and build Altitude without having to worry about infrastructure, scaling, or server management."

The downside of Dark: programmers have to learn a new language

Speaking to TechCrunch, Chisa admitted there was a downside to Dark: you have to learn a new language. "I think the biggest downside of Dark is definitely that you're learning a new language, and using a different editor when you might be used to something else, but we think you get a lot more benefit out of having the three parts working together." Chisa acknowledged that it will require evangelizing the methodology to programmers, who may be used to employing a particular set of tools to write their programs. But according to her, the biggest selling point is that it removes the complexity around deployment by bringing an integrated level of automation to the process.

Is Darklang basically like AWS Lambda?

The community on Hacker News compares Dark with AWS Lambda, with many pessimistic about its prospects. In particular, they are skeptical about the efficiency gains Chisa describes. "It only sounds maybe 1 step removed from where aws [sic] lambda's are now," said one user. "You fiddle with the code in the lambda IDE, and submit for deployment. Is this really that much different?"

Dark's co-founder Paul Biggar responded to this in the thread: "Dark founder here. Yes, completely agree with this. To a certain extent, Dark is aimed at being what lambda/serverless should have been." He continues: "The thing that frustrates me about Lambda (and really all of AWS) is that we're just dealing with a bit of code and bit of data. Even in 1999 when I had just started coding I could write something that runs every 10 minutes. But now it's super challenging. Why is it so hard to take a request, munge it, send it somewhere, and then respond to it. That should be trivial! (and in Dark, it is)"

The team plans to roll the product out publicly in September. To find out more about Dark, read the team's blog posts, including What is Dark, How Dark is a functional language, and How Dark allows deploys in 50ms.

- The V programming language is now open source - is it too good to be true?
- "Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
- Rust's original creator, Graydon Hoare on the current state of system programming and safety