
Tech News


Spotify releases Chartify, a new data visualization library in python for easier chart creation

Natasha Mathur
19 Nov 2018
2 min read
Spotify announced last week that it has come out with Chartify, a new open source Python data visualization library that makes it easy for data scientists to create charts. It comes with features such as a concise, user-friendly syntax and consistent data formatting, among others. Let's have a look at the features of this new library.

Concise and user-friendly syntax
Despite the abundance of tools such as Seaborn, Matplotlib, Plotly, and Bokeh used by data scientists at Spotify, chart creation has always been a major pain point in the data science workflow. Chartify solves that problem: its syntax is considerably more concise and user-friendly than that of the other tools. Suggestions added in the docstrings allow users to recall the most common formatting options. This saves time, letting data scientists spend less time configuring chart aesthetics and more time actually creating charts.

Consistent data formatting
Another common problem data scientists face is that different plotting methods need different input data formats, requiring users to completely reformat their input data. As a result, data scientists spend a lot of time manipulating data frames into the right state for their charts. Chartify's consistent input data formatting lets you quickly create and iterate on charts, since less time is spent on data munging.

Other features
Since a majority of charting problems can be solved with just a few chart types, Chartify focuses mainly on these use cases and comes with a complete example notebook presenting the full list of chart types Chartify can generate. Moreover, adding color to charts greatly helps simplify the charting process, which is why Chartify offers different palette types aligned to the different use cases for color. Additionally, Chartify is built on top of Bokeh, an interactive Python library for data visualization, giving users the option to fall back on manipulating Chartify charts with Bokeh if they need more control.

For more information, check out the official Chartify blog post.

cstar: Spotify's Cassandra orchestration tool is now open source!
Spotify has "one of the most intricate uses of JavaScript in the world," says former engineer
8 ways to improve your data visualizations
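For a flavor of the concise syntax described above, here is a minimal sketch of a Chartify bar chart, following the patterns in the project's documentation; the data frame and column names are invented for the example:

```python
import pandas as pd
import chartify

# Hypothetical tidy data frame; any similarly shaped frame works as input.
data = pd.DataFrame({
    'fruit': ['Banana', 'Apple', 'Grape', 'Orange'],
    'quantity': [4, 3, 2, 5],
})

# Every Chartify chart follows the same pattern: construct a Chart,
# then call a plot method with the frame and the relevant column names.
ch = chartify.Chart(blank_labels=True, x_axis_type='categorical')
ch.plot.bar(
    data_frame=data,
    categorical_columns='fruit',
    numeric_column='quantity',
)
ch.set_title('Fruit sold')
ch.axes.set_yaxis_label('Quantity')
ch.show()
```

The consistent `data_frame`/column-name interface is the point: switching to a line or scatter chart changes the plot method, not the shape of your input data.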


NIPS finally sheds its ‘sexist’ name for NeurIPS

Natasha Mathur
19 Nov 2018
4 min read
The ‘Neural Information Processing Systems’ (NIPS) conference, a well-known machine learning and computational neuroscience conference, adopted ‘NeurIPS’ as an alternative acronym last week. The acronym ‘NIPS’ had been under the spotlight worldwide over the past few years, as some members of the community considered it “sexist” and pointed out that it is offensive towards women.

“Something remarkable has happened in our community. The name NeurIPS has sprung up organically as an alternative acronym, and we’re delighted to see it being adopted”, mentioned the NeurIPS team.

The team also added that it has taken a couple of measures to support the new acronym. All signage and the program booklet for the 2018 meeting will use either the full conference name or NeurIPS to refer to the conference. Sponsors have been asked to make the required changes in their document materials, and a branding company has been hired to design a new logo for the conference. Moreover, the conference site has been moved to neurips.cc. “One forward-thinking member of the community purchased neurips.com and described the site’s purpose as ‘host[ing] the conference content under a different acronym... until the board catches up,’” as mentioned on the NeurIPS news page.

NIPS organizers had conducted a poll back in August on the NIPS website, asking people whether they agreed or disagreed with the name change. Around 30% of the respondents said they supported the name change (28% of males and about 44% of females), while 31% ‘strongly disagreed’ with the name change proposal (31% of males and 25% of females). This had led to NIPS keeping the name as it was. However, many people were upset by the board’s decision, and when the emphasis on a name change within the community became evident, the name was revised.

One person who was greatly dissatisfied with the decision was Anima Anandkumar, director of Machine Learning at Nvidia, who started a petition on change.org last month. The petition has gathered 1,500 supporters as of today. “The acronym of the conference is prone to unwelcome puns, such as the perhaps subversively named pre-conference ‘TITS’ event and juvenile t-shirts such as ‘my NIPS are NP-hard’, that add to the hostile environment that many ML researchers have unfortunately been experiencing,” reads the petition. Anandkumar pointed out that some of these incidents trigger uncomfortable memories for many researchers who have faced harassing behavior in the past. She also tweeted with #ProtestNIPS in support of the conference changing its name, which received over 300 retweets.
https://twitter.com/AnimaAnandkumar/status/1055262867501412352

After the board’s decision to rebrand, Anandkumar tweeted thanking everyone for their support for #protestNIPS: “I wish we could have started with a clean slate and done away with problematic legacy, but this is a compromise. I hope we can all continue to work towards better inclusion in #ml”. Other than Anandkumar, many other people were equally active in amplifying the support for #protestNIPS.

People in support of #protestNIPS

Jeff Dean, head of Google AI
Dean tweeted in support of Anandkumar, saying that NIPS should take the issue of the name change seriously:
https://twitter.com/JeffDean/status/1055289282930176000
https://twitter.com/JeffDean/status/1063679694283857920

Dr. Elana J Fertig, Associate Professor of Applied Mathematics, Johns Hopkins
Fertig also tweeted in support of #protestNIPS: “These type of attitudes cannot be allowed to prevail in ML. Women need to be welcome to these communities. #WomenInSTEM”
https://twitter.com/FertigLab/status/1063908809574354944

Daniela Witten, professor of (bio)statistics, University of Washington
Witten tweeted: “I am so disappointed in @NipsConference for missing the opportunity to join the 21st century and change the name of this conference. But maybe the worst part is that their purported justification is based on a shoddy analysis of their survey results”.
https://twitter.com/daniela_witten/status/1054800517421924352
https://twitter.com/daniela_witten/status/1054800519607181312
https://twitter.com/daniela_witten/status/1054800521582731264

“Thanks to everyone who has taken the time to share thoughts and concerns regarding this important issue. We were considering alternative acronyms when the community support for NeurIPS became apparent. We ask all attendees this year to respect this solution from the community and to use the new acronym in order that the conference focus can be on science and ideas”, mentioned the NeurIPS team.

NIPS 2017 Special: Decoding the Human Brain for Artificial Intelligence to make smarter decisions
NIPS 2017 Special: A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey


Mozilla v. FCC: Mozilla challenges FCC’s elimination of net neutrality protection rules

Bhagyashree R
19 Nov 2018
4 min read
Last week, Mozilla announced that it, along with other petitioners, has filed a reply brief in the case Mozilla v. FCC. The case challenges the FCC's elimination of the net neutrality protection rules, which required internet providers to treat all online traffic equally.

What is net neutrality, anyway?
Net neutrality is a principle that requires internet service providers to treat all data on the internet equally, allowing all traffic to flow at the same rate along the same path. Without net neutrality, ISPs can create fast and slow lanes, decide to block sites, and charge companies more money to prioritize their content.

FCC repealing the Open Internet Order
A core issue for net neutrality was whether ISPs should be classified as Title I (information services) or Title II (common carrier services) under the Communications Act of 1934. If ISPs are classified as Title II, the FCC has significant ability to regulate them, but it has little control over them if they are classified as Title I. In 2015, the FCC reclassified ISPs as Title II services under the Open Internet Order, which gave it the authority to enforce net neutrality. The order banned the blocking and slowing of web content by internet providers, prohibited the practice of paid prioritization, and introduced a "general conduct" standard that gave the FCC the ability to investigate unethical broadband practices.

In April 2017, Ajit Pai became the FCC chairman as part of the Trump administration. He proposed to repeal the net neutrality policies and reclassify ISPs as Title I services. When the draft of this repeal was published in May 2017, the FCC received over 20 million comments. A majority of commenters favored retaining the 2015 Open Internet Order, but the FCC still repealed it, and the repeal went into effect in June 2018.

Mozilla v. FCC
Mozilla, alongside other companies, trade groups, states, and organizations, filed the case (Mozilla v. FCC) against the FCC in August this year to defend the net neutrality rules. In its reply, Mozilla argued that rolling back the rules was not just bad policy but unlawful:

"The FCC's removal of net neutrality rules is not only bad for consumers, it is also unlawful. The protections in place were the product of years of deliberation and careful fact-finding that proved the need to protect consumers, who often have little or no choice of internet provider. The FCC is simply not permitted to arbitrarily change its mind about those protections based on little or no evidence."

The case advocates consumers' right to access content and services online without ISPs blocking, throttling, or discriminating against their favorite services. Following are the arguments Mozilla is making against the FCC's decision to repeal the open internet rules:

"The FCC order fundamentally mischaracterizes how internet access works. Whether based on semantic contortions or simply an inherent lack of understanding, the FCC asserts that ISPs simply don't need to deliver websites you request without interference.

The FCC completely renounces its enforcement ability and tries to delegate that authority to other agencies. But only Congress can grant that authority; the FCC can't decide it's just not its job to regulate telecommunications services and promote competition.

The FCC ignored the requirement to engage in a 'reasoned decision making' process, ignoring much of the public record as well as its own data showing that consumers lack competitive choices for internet access, which gives ISPs the means to harm access to content and services online."

You can read more about the case Mozilla v. FCC and read Mozilla's reply brief on its official website.

US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again
Spammy bots most likely influenced FCC's decision on net neutrality repeal, says a new Stanford study
The U.S. Justice Department sues to block the new California Net Neutrality law


Microsoft announces official support for Windows 10 to build 64-bit ARM apps

Prasad Ramesh
19 Nov 2018
2 min read
Last week, Microsoft announced that developers using Visual Studio now have access to officially supported SDKs and tools for creating 64-bit ARM (ARM64) apps. The Microsoft Store is now also accepting submissions for apps built for the ARM64 architecture.

Lenovo and Samsung are coming up with new Windows 10 ARM devices featuring the Qualcomm Snapdragon 850 chip. An x86 emulation layer lets these devices run Windows applications. Developers can use Visual Studio 15.9 to recompile apps, both UWP and C++ Win32, to run natively on ARM devices running Windows 10. Running natively allows applications to take complete advantage of the processing power and capabilities of the device, resulting in the best possible experience for users.

Instructions to enable Windows 10 64-bit ARM app support
Update Visual Studio to version 15.9, and ensure that you have installed the individual component "Visual C++ compilers and libraries for ARM64" if you plan to build ARM64 C++ Win32 apps. After updating, ARM64 appears as an available build configuration for new UWP projects. For existing UWP projects and C++ Win32 projects, an ARM64 configuration needs to be added to the project. This can be done via the configuration properties in Configuration Manager: add a new active solution platform, name it ARM64, copy the settings from ARM or x64, and check the box to create new project platforms. Hitting build should then produce the ARM64 binaries.

You can use remote debugging, which is fully supported on ARM64, to debug your app. Alternatively, you can create a package for sideloading or directly copy the binaries to the device to run the app.

The Microsoft Store is now accepting ARM64 UWP apps, both C++ and .NET Native. You can also use the Desktop Bridge to wrap ARM64 binaries into a package to submit to the Store. You can also host dedicated ARM64 versions of Win32 apps on your own website or integrate ARM64 into existing multi-architecture installers. For more instructions, visit the Windows Blog.

Another bug in Windows 10 October update that can cause data loss
Microsoft announces .NET Standard 2.1
Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers
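Once the ARM64 solution platform has been added as described above, the same configuration can also be built from a developer command prompt rather than the Configuration Manager dialog; a minimal sketch, with a placeholder solution name:

```
msbuild MyApp.sln /p:Configuration=Release /p:Platform=ARM64
```

This is the standard MSBuild platform switch, so it slots into existing CI scripts alongside the x86/x64 builds of a multi-architecture installer.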


The Packt top 10 for $10

Packt Editorial Staff
19 Nov 2018
5 min read
Right now, every eBook and every video is just $10 each on the Packt store. Need somewhere to get started? Here's our Black Friday top ten for just $10.

Deep Reinforcement Learning Hands-On
Reinforcement learning is the hottest topic in AI research. The technique allows a machine learning agent to grow through trial and error in an interactive environment. Just like a human, it builds its intelligence and understanding by learning from its experiences. In Deep Reinforcement Learning Hands-On, expert author Maxim Lapan reveals the reinforcement learning methods responsible for paradigm-shifting AI such as Google's AlphaGo Zero. Filling the gaps between theory and practice, this book focuses on practical insight into how reinforcement learning works - hands-on! Find out more.

The Modern C++ Challenge
"I would recommend this to anyone" ★★★★ Amazon Review
Take on the modern C++ challenge! Designed to hone and test your C++ skills, The Modern C++ Challenge consists of a stack of programming problems for developers of all levels. These problems don't just test your knowledge of the language, but your skill as a programmer. Think outside the box to come up with the answers, and don't worry: if you're ever stumped, we've got the best solutions to the problems right in the book. So are you up for the challenge? Learn more.

Angular 6 for Enterprise-Ready Web Applications
The demands of modern business for powerful and reliable web applications are huge. In Angular 6 for Enterprise-Ready Web Applications, software development expert and conference speaker Doguhan Uluca takes a hands-on, minimalist approach to designing and architecting high-quality Angular apps. More than just a technical manual, this book introduces enterprise-level project delivery methods. Use Kanban to focus on value delivery, communicate design ideas with mock-up tools, and build great-looking apps with Angular Material. Find out more.

Mastering Blockchain - Second Edition
"I love this book and have recommended it to everyone I know who is interested in Blockchain. I also teach Blockchain at the graduate school level and have used this book in my course development and teaching... quite simply, there is nothing better on the market." ★★★★★ Amazon Review
2018 has been the year that blockchain and cryptocurrency hit the mainstream. Fully updated and revised from the bestselling first edition, Mastering Blockchain is dedicated to showing you how to put this revolutionary technology into implementation in the real world. Develop Ethereum applications, discover blockchain for business frameworks, build Internet of Things apps using blockchain - and more. The possibilities are endless. Find out more.

Mastering Linux Security and Hardening
Network engineer or systems administrator? You need this book. In one 378-page volume, you'll be equipped with everything you need to know to deliver a Linux system that's resistant to being hacked. Fill your arsenal with security techniques including SSH hardening, network service detection, setting up firewalls, encrypting file systems, and protecting user accounts. When you're done, you'll have a fortress that will be much, much harder to compromise. Find out more.

Mastering Go
The CEO of Shopify famously said, "Go will be the server language of the future." Mastering Go shows you how to deliver on that promise. Take your Go skills beyond the basics and learn how to integrate them with production code. Filled with details on the interplay of systems and networking code, Mastering Go will get you writing server-level code that plays well in all environments. Learn more.

Mastering Machine Learning Algorithms
From financial trading to your Netflix recommendations, machine learning algorithms rule modern life. But while each algorithm is often a highly prized secret, all are often built upon a core algorithmic theory. Mastering Machine Learning Algorithms is your complete guide to quickly getting to grips with popular machine learning algorithms. You will be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and will learn how to use them in the best possible manner. If you are looking for a single resource to study, implement, and solve end-to-end machine learning problems and use cases, this is the book you need. Find out more.

Learn Qt 5
Cross-platform development is a big promise. Qt goes beyond the basics of 'runs on Android and iOS' or 'works on Windows and Linux'. If you build your app with Qt, it's truly cross-platform, offering intuitive and easy GUIs for everything from mobile and desktop to Internet of Things, automotive devices, and embedded apps. Learn Qt 5 gives hands-on coverage of the suite of essential techniques that will empower you to progress from a blank page to a shipped Qt application. Write your Qt application once, then deploy it to multiple operating systems with ease. Learn more.

Microservice Patterns and Best Practices
Microservices empower your organization to deliver applications continuously and with agility. But the proper architecture of microservices-based applications can be tricky. Microservice Patterns and Best Practices shows you the best way to build and structure your microservices. Start making the right choices at the application development stage, and learn how to cut your monolithic app down into manageable chunks. Find out more.

Natural Language Processing with TensorFlow
In Natural Language Processing with TensorFlow, chief data scientist Thushan Ganegedara unravels the complexities of natural language processing. An expert on working with untested data, Thushan gives you invaluable tools to tackle immense and unstructured data volumes. Processing your raw corpus is key to effective deep learning. Let Thushan show you how with NLP and Python's most popular deep learning library. Learn more.


Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix

Melisha Dsouza
19 Nov 2018
3 min read
On the 4th of November, Linux 4.20-rc1 was released with a host of notable changes: AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, the removal of Speck, and other new hardware support additions and software features. But the release that was supposed to improve the kernel's performance did not succeed in doing so. On the contrary, the kernel is much slower compared to previous stable Linux kernel releases.

In a blog post published by Phoronix, Michael Larabel, lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org, discussed the results of tests conducted on the kernel. He bisected the 4.20 kernel merge window to explore the reasons for the significant slowdowns in many real-world workloads. The article attributes this degradation in performance to mitigations for the Spectre flaws in the processor.

To mitigate against the Spectre flaw, an intentional kernel change was made, termed "STIBP", for cross-hyperthread Spectre mitigation on Intel processors. Single Thread Indirect Branch Predictors (STIBP) prevents cross-hyperthread control of decisions made by indirect branch predictors. The STIBP addition in Linux 4.20 affects systems that have up-to-date/available microcode with this support and where the CPU has Hyper-Threading enabled/present.

Performance issues in Linux 4.20
Michael has done a detailed analysis of the kernel's performance; here are some of his findings. Many synthetic and real-world tests showed that Intel Core i9 performance was not up to the mark: the Rodinia scientific OpenMP tests took 30% longer, Java-based DaCapo tests took up to ~50% more time to complete, and the code compilation tests also grew longer. There was lower PostgreSQL database server performance and longer Blender3D rendering times. All this was noticed on Core i9 7960X and Core i9 7980XE test systems, while AMD Threadripper 2990WX performance was unaffected by the Linux 4.20 upgrade.

The latest Linux kernel Git benchmarks also saw a significant pullback in performance from the early days of the Linux 4.20 merge window through the very latest kernel code as of today. The affected systems included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems. The tests found that the Smallpt renderer slowed down significantly, PHP performance took a major dive, and HMMer also faced a major setback compared to the current Linux 4.19 stable series.

What is surprising is that Linux 4.19 also contains mitigations against Spectre, Meltdown, Foreshadow, etc., yet 4.20 shows an additional performance drop on top of all the previously outlined performance hits this year. Throughout the testing, the AMD systems didn't appear to be impacted. This also means that if a user disables Spectre V2 mitigations to get better performance, the system's security could be compromised.

You can head over to Phoronix for a complete analysis of the test outputs and more information on this news.

Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
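On kernels that expose the vulnerabilities interface in sysfs (4.15 and later), you can check which Spectre V2 mitigations, including STIBP where the kernel and microcode support it, are active on a given machine:

```
# Prints the active Spectre V2 mitigations, e.g. retpoline, IBPB, STIBP
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
```

The exact string varies by kernel version, CPU, and microcode, which is useful when comparing benchmark results across machines like those in the Phoronix tests.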

Tulsa wants to become the hub for remote tech workers: offers $10,000 cash benefits, co-working space, and furnished apartments

Sugandha Lahoti
19 Nov 2018
4 min read
Tulsa, Oklahoma has designed a special state-backed program, Tulsa Remote, offering $10,000 grants to eligible applicants who commit to living in the city for a year while working remotely. The program aims to draw tech workers and creatives to join and help grow the Tulsa community. It is being carried out in partnership with the George Kaiser Family Foundation, a group that works to tackle pressing issues in the Tulsa community through community building, collaboration, and networking in the city.

Applicants should meet four eligibility requirements:
They can move to Tulsa within 6 months.
They have full-time remote employment or are self-employed outside of Tulsa County.
They are 18+ years old.
They are eligible to work in the United States.

Eligible workers will receive a $10,000 cash benefit, distributed throughout the course of one year: an initial $2,500 for relocation expenses, a $500 monthly stipend, and a final payout of $1,500 once the program is completed. Workers will be provided a co-working space at 36 Degrees North in Downtown Tulsa to help them collaborate with other local entrepreneurs, remote workers, and digital nomads. The co-working space will offer complimentary snacks and beverages, as well as monthly meetups and workshops with fellow members and Tulsa entrepreneurs.

Participants can live in the City of Tulsa or in the county. They will also have the option of living in new, fully furnished apartments in the heart of the Tulsa Arts District at a discounted price, with the Tulsa community covering utilities for the first three months. The program will start with small groups of around 10 to 15 people at a time; the city hopes to have up to 300 remote workers in the program.

According to Ken Levit, an executive director at the George Kaiser Family Foundation, "While the program seeks strong workers from the tech sector, it also hopes to draw a broad array of dynamic and talented applicants, such as corporate recruiters, researchers, and writers."

With this move, Tulsa joins a growing list of U.S. cities (Baltimore, Maryland; St. Clair County, Michigan; and Marquette, Kansas) looking to attract young professionals. When Amazon invited North American cities to compete for its second headquarters, Tulsa also applied. Tulsa's proposal for Amazon HQ2 was submitted by the Tulsa Regional Chamber, and city and community leaders offered guidance to put forth a strong and competitive proposal that would be a direct reflection of the community. Their pitch included a strong focus on community culture, labor, public investment initiatives, site location, and transportation.

"While it's disappointing that Tulsa didn't make the shortlist for Amazon HQ2, the effort was certainly worthwhile," said Mike Neal, president and CEO of the Tulsa Regional Chamber. "Pitching for one of the largest headquarters projects in the world taught us a great deal about Tulsa's competitive strengths."

Although not chosen, Tulsa's plans (and those of other similar cities) highlight the fact that non-tech cities are fighting neck and neck to attract future investments and collaboration while growing tech talent in-house. They are serious about this process, and many state/province and local communities are coming up with lucrative incentives to bring big businesses and talent to their cities.

University of Washington Associate Professor of History Margaret O'Mara warned cities before the HQ2 results: "I would caution that as exciting as the possibility of being the site that lands the Amazon HQ might be for cities, I would want them to think about what the bigger tradeoffs might be if they are putting together a package to try and lure them to town."

Apply here if you're interested in the Tulsa Remote program. You can also get information and other resources at info@tulsaremote.com.

Amazon splits HQ2 between New York and Washington, D.C. after making 200+ states compete over a year; public sentiments largely negative
15 million jobs in Britain at stake with Artificial Intelligence robots set to replace humans at the workforce
Mozilla funds winners of the 2018 Creative Media Awards for highlighting unintended consequences of AI in society


Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?

Melisha Dsouza
19 Nov 2018
4 min read
On 16th November, the CEO of Google Cloud, Diane Greene, announced in a blog post that she will be stepping down after three years of running Google Cloud. The position will be taken up by Thomas Kurian, who has worked at Oracle for the past 22 years. Kurian will join Google Cloud on November 26th and transition into the leadership role in early 2019, while Greene stays on as CEO until the end of January 2019. After that, she will continue as a director on the Alphabet board.

Google Cloud led by Diane Greene
Diane Greene has been leading Google's cloud computing division since early 2016. She was considered Google's best bet at making the cloud its second-largest source of revenue while competing with Amazon and Microsoft in providing computing infrastructure for businesses. However, there is speculation that this decision indicates the project hasn't gone as well as planned. Although the cloud division has seen notable advances under Greene's leadership, Amazon and Microsoft have stayed a step ahead in their cloud businesses. According to Canalys, Amazon has roughly a third of the global cloud market, which contributes more to its revenue than sales on Amazon.com. Microsoft has roughly half of Amazon's market share and currently owns 8 percent of the global market for cloud infrastructure services.

Maribel Lopez of Lopez Research states, "When Diane Greene came in they had a really solid chance of being the No. 2 provider. Microsoft has really closed the gap and is the No. 2 provider for most enterprise customers by a significant margin."

Greene acquired customers such as Twitter, Target, and HSBC for Google Cloud, and major Fortune 1000 enterprises now depend on Google Cloud for their future. Under her leadership, Google established a training and professional services organization and Google partner organizations, and came up with ways to help enterprises adopt AI through its Advanced Solutions Lab. Google's industry verticals have achieved massive traction in health, financial services, retail, gaming and media, energy and manufacturing, and transportation. Along with the Cloud ML and Cloud IoT groups, Google acquired Apigee, Kaggle, Qwiklabs, and several promising small startups. Greene also oversaw projects like creating custom chips for machine learning, gaining traction for artificial intelligence on the platform.

While the AI-centric approach brought Google into the limelight, Meaghan McGrath, who tracks Google and other cloud providers at Technology Business Research, says, "They've been making the right moves and saying the right things, but it just hasn't shown through in performance financially." She further stresses that Google is still hamstrung by a perception that it doesn't really know how to work with corporate IT departments, an area where Microsoft has made its mark.

Kurian to join Google
Thomas Kurian worked at Oracle for the past 22 years and, since 2015, was its president of product development. Kurian told employees in an email on September 5th that he was taking "extended time off from Oracle". The company said in a statement at the time that "we expect him to return soon." Twenty-three days later, Oracle put out a filing saying that Kurian had resigned "to pursue other opportunities."

Google and Oracle do not have a pleasant history together. The two companies are involved in an eight-year legal battle over Google's use of the Java programming language, without a license, in developing its Android operating system for smartphones; Oracle owns the intellectual property behind Java. In March, the Federal Circuit reversed a district court's ruling that had favored Google, sending the case back to the lower court to determine the damages Google must now pay Oracle.

CNBC reports that one former Google employee, who asked not to be named because of the sensitivity of the matter, is not optimistic that Kurian will be well received, since Kurian still has to figure out how to work with Googlers. It would be interesting to see how the face of Google Cloud changes under Kurian's leadership. You can head over to Google's blog to read more about this announcement.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
10 useful Google Cloud AI services for your next machine learning project [Tutorial]


GitHub Octoverse: The top programming languages of 2018

Prasad Ramesh
19 Nov 2018
4 min read
Following the GitHub Octoverse report last month, GitHub released an analysis of the top programming languages of 2018 on its platform. There are various ways to rank the popularity of a programming language. In the report published on the GitHub Blog, popularity was measured by the number of unique contributors to public and private repositories tagged with a given primary language, as well as by the number of repositories tagged with that primary language.

JavaScript is the top programming language by repositories
The most repositories are created in JavaScript, and the number of repositories created has risen steadily since 2012. Around that time, GitHub housed nearly 1 million repositories in total. New JavaScript frameworks like Node.js were launched in 2009, making it possible for developers to build the client and server sides with the same code.
Source: GitHub Blog

JavaScript also has the most contributors
JavaScript tops the list of languages with the most contributors in public and private repositories. This is the case for organizations of every size in all regions of the world. New languages have also been on the rise on GitHub. In 2017, TypeScript entered the top 10 programming languages for all kinds of repositories across all regions. Projects like DefinitelyTyped, which helps developers use common JavaScript libraries with TypeScript, encourage its adoption.

Some languages have also seen a decline in popularity. Ruby has sunk in the charts over the last couple of years. Even though the number of Ruby contributors is on the rise, other languages like JavaScript and Python have grown faster. Newer projects are less likely to be written in Ruby, especially projects owned by individual users or small organizations; such projects tend to use popular languages like JavaScript, Java, or Python.
Source: GitHub Blog

Languages by contributors in different regions
Across regions, there isn't much variation in the languages used. Ruby is at the bottom for all regions. TypeScript ranks higher in South America and Africa than in North America and Europe. The reason could be that the developer communities there are relatively new: the repositories in Africa and South America are younger than those in North America and Europe.

Fastest growing languages by contributors
PowerShell is climbing the list. Go also continues to grow across repository types, ranking 7th overall and 9th for open source repositories. Statically typed languages that focus on type safety and interoperability, like Kotlin, TypeScript, and Rust, are growing quickly.

So what makes a programming language popular on GitHub? Three factors help top programming languages climb the ranks: type safety, interoperability, and being open source.

Type safety: There's a rise in static typing, Python excepted, because of the security and efficiency it offers individual developers and teams. TypeScript's optional static typing adds safety, while Kotlin offers greater interactivity when creating trustworthy, type-safe programs.

Interoperability: One of the reasons TypeScript climbed the rankings is its ability to coexist and integrate with JavaScript. Rust and Kotlin, also on the rise, find built-in audiences in C and Java, respectively. Python developers can directly call Python APIs from Swift, which displays Python's versatility and interoperability.

Open source: These languages are themselves open source projects with active commits and changes. Strong communities that contribute, evolve, and create resources for a language can positively impact its life.

For more details and charts, visit the GitHub Blog.

What we learnt from the GitHub Octoverse 2018 Report
Why does the C programming language refuse to die?
Julia for machine learning. Will the new language pick up pace?
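As a small illustration of the interoperability point made above, TypeScript's optional static typing lets existing JavaScript-style code keep running unchanged while new code opts into compile-time checks; a minimal sketch, with invented function names:

```typescript
// Legacy JavaScript-style code can be typed loosely and consumed as-is.
function legacySum(values: any[]): number {
  return values.reduce((acc, v) => acc + Number(v), 0);
}

// The same logic with static types: the compiler now rejects bad inputs.
function typedSum(values: number[]): number {
  return values.reduce((acc, v) => acc + v, 0);
}

console.log(legacySum([1, "2", 3])); // 6 - tolerated, checked only at runtime
console.log(typedSum([1, 2, 3]));    // 6 - verified at compile time
// typedSum([1, "2", 3]);            // compile-time error
```

This incremental adoption path, rather than a rewrite, is a large part of why TypeScript could climb the rankings so quickly.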


Introducing Cycle.js, a functional and reactive JavaScript framework

Bhagyashree R
19 Nov 2018
3 min read
Cycle.js is a functional and reactive JavaScript framework for writing predictable code. Apps built with Cycle.js consist of pure functions, which means they only take inputs and generate predictable outputs, without performing any I/O effects.

What is the basic concept behind Cycle.js?
Cycle.js considers your application a pure main() function. It takes inputs that are read effects (sources) from the external world and gives outputs that are write effects (sinks) to affect the external world. Drivers, plugins that handle DOM effects, HTTP effects, and so on, are responsible for managing these I/O effects in the external world.
Source: Cycle.js

The main() function is built using reactive programming primitives that maximize separation of concerns and provide a fully declarative way of organizing your code. The dataflow in your app is clearly visible in the code, making it readable and traceable. Here are some of its properties:

Functional and reactive
As Cycle.js is functional and reactive, it allows developers to write predictable and separated code. Its building blocks are reactive streams from libraries like RxJS, xstream, or Most.js, which greatly simplify code related to events, asynchrony, and errors. This application structure also separates concerns, as all dynamic updates to a piece of data are co-located and impossible to change from outside.

Simple and concise
The framework is very easy to learn and get started with, as it has very few concepts. Its core API has just one function, run(app, drivers). Apart from that, there are streams, functions, drivers, and a helper function to isolate scoped components. Most of its building blocks are just JavaScript functions. Functional reactive streams are able to build complex dataflows with very few operations, which makes Cycle.js apps very small and readable.

Extensible and testable
In Cycle.js, drivers are simple functions that take messages from sinks and call imperative functions. All I/O effects are done by the drivers, which means your application is just a pure function. This makes it very easy to swap the drivers around. Currently, there are drivers for React Native, HTML5 Notification, Socket.io, and so on. Also, with Cycle.js, testing is just a matter of feeding inputs and inspecting the output.

Composable
As mentioned earlier, a Cycle.js app, no matter how complex, is a function that can be reused in a larger Cycle.js app. Sources and sinks act as the interface between the application and the drivers, but they are also the interface between a child component and its parent. Cycle.js components are not just GUI widgets like in other frameworks: you can make Web Audio components, network request components, and others, since the sources/sinks interface is not exclusive to the DOM.

You can read more about Cycle.js on its official website.

Introducing Howler.js, a Javascript audio library with full cross-browser support
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
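To make the main()/drivers split concrete, here is a minimal counter sketch in the style of the Cycle.js documentation, using @cycle/run, @cycle/dom, and xstream; the element id and CSS class names are arbitrary:

```javascript
import {run} from '@cycle/run';
import {makeDOMDriver, div, button} from '@cycle/dom';

// main() is a pure function: DOM events in (sources), a virtual DOM stream out (sinks).
function main(sources) {
  const increment$ = sources.DOM.select('.add').events('click').mapTo(+1);
  const count$ = increment$.fold((count, delta) => count + delta, 0);

  const vdom$ = count$.map(count =>
    div([
      button('.add', 'Add'),
      div(`Count: ${count}`),
    ])
  );

  return {DOM: vdom$};
}

// The DOM driver performs the actual write effects; main() stays pure.
// Assumes an element with id "app" exists in the page.
run(main, {
  DOM: makeDOMDriver('#app'),
});
```

Testing such an app is, as the article says, a matter of feeding a stream of fake click events into main() and inspecting the virtual DOM stream it returns; no browser is required.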

Evan You shares Vue 3.0 updates at VueConf Toronto 2018

Bhagyashree R
16 Nov 2018
3 min read
VueConf Toronto 2018, a three-day event running from November 14 to 16, commenced on November 14th. One of the speakers at the event was Evan You, the creator of Vue.js, who shared what to expect from the yet-to-be-released Vue 3.0.
https://twitter.com/Ionicframework/status/1063244741343629313

Following are some of the updates that were announced at the conference:

Faster and more maintainable code architecture
Vue 3.0 is rewritten from the ground up to make its architecture cleaner and more maintainable. To improve speed, some internal functionality is broken into individual packages in order to isolate the scope of complexity. We can expect 100% faster mounting and patching with this release.

Improved slots mechanism
All compiler-generated slots are now functions, invoked during the child component's render call. The dependencies in slots are collected as dependencies of the child instead of the parent: when slot content changes, only the child is re-rendered, and if the parent re-renders, the child does not have to if its slot content did not change. This change prevents useless re-renders by offering even more precise change detection at the component tree level.

Proxy-based observation mechanism
Vue 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This eliminates a number of limitations in the current implementation of Vue 2, which is based on Object.defineProperty:
Detection of property addition / deletion
Detection of Array index mutation / .length mutation
Support for Map, Set, WeakMap and WeakSet

Tree-shaking friendly
The new codebase is tree-shaking friendly. Features such as built-in components and directive runtime helpers can be imported on demand and tree-shaken. Tree-shakable features also allow the Vue developers to offer more built-in features in the future without incurring payload penalties for users that don't use them.

Easily render-to-native with the Custom Renderer API
Developers will be able to create custom renderers with the Custom Renderer API, so they no longer need to fork the Vue codebase with custom modifications. This allows render-to-native projects like Weex and NativeScript Vue to easily stay up to date with upstream changes, and makes it trivially easy to create custom renderers for various other purposes.

In addition to these improvements, Vue 3.0 will come with an experimental Hooks API, better warning traces, experimental time-slicing support, IE11 support, and improved TypeScript support with TSX.

Read more about the Vue 3.0 updates in the presentation shared by Evan You.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
React vs. Vue: JavaScript framework wars
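To see why a Proxy lifts those Object.defineProperty limitations, compare two bare-bones change trackers; this is an illustrative sketch of the mechanism, not Vue's actual implementation:

```javascript
// Vue 2 style: Object.defineProperty must wrap each known key up front,
// so properties added later are invisible to the tracker.
function observeV2(obj, onChange) {
  Object.keys(obj).forEach(key => {
    let value = obj[key];
    Object.defineProperty(obj, key, {
      get() { return value; },
      set(newValue) { value = newValue; onChange(key); },
    });
  });
  return obj;
}

// Vue 3 style: a Proxy intercepts every get/set/delete, including
// properties that did not exist when observation started.
function observeV3(obj, onChange) {
  return new Proxy(obj, {
    set(target, key, value) { target[key] = value; onChange(key); return true; },
    deleteProperty(target, key) { delete target[key]; onChange(key); return true; },
  });
}

const a = observeV2({count: 0}, key => console.log('v2 changed:', key));
a.count = 1;    // logged
a.label = 'x';  // NOT logged: the key was added after observation began

const b = observeV3({count: 0}, key => console.log('v3 changed:', key));
b.count = 1;    // logged
b.label = 'x';  // logged: Proxies see newly added properties too
delete b.label; // logged: deletion is detectable as well
```

The same trap mechanism extends naturally to array index and .length mutation, which is why the Vue 2 caveats around arrays disappear in 3.0.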


OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name ‘Open Infrastructure Summit’

Melisha Dsouza
16 Nov 2018
3 min read
At the OpenStack Summit in Berlin this week, the OpenStack Foundation announced that all its bi-annual conferences will now be conducted under the name 'Open Infrastructure Summit'. According to TechCrunch, the Foundation itself won't be rebranding its name, but the nature of what the Foundation does is changing. The board will now adopt new projects outside of the core OpenStack project, and there will be a process for adding "pilot projects" and fostering them for a minimum of 18 months. The focus for these projects will be on continuous integration and continuous delivery (CI/CD), container infrastructure, edge computing, data centers, and artificial intelligence and machine learning. OpenStack currently has these pilot projects in development: Airship, Kata Containers, StarlingX, and Zuul.

OpenStack says the idea is not to manage multiple projects for their own sake, or to increase the Foundation's revenue; rather, the scope is focused on people who run or manage infrastructure. There are no new boards of directors or foundations for each project, and the team assures its members that the actual OpenStack technology isn't going anywhere. OpenStack Foundation CTO Mark Collier said, "We said very clearly this week that open infrastructure starts with OpenStack, so it's not separate from it. OpenStack is the anchor tenant of the whole concept." Sell added, "All that we are doing is actually meant to make OpenStack better."

Adding his insights on the decision, Canonical founder Mark Shuttleworth worried that the focus on multiple projects will "confuse people about OpenStack." He further added, "I would really like to see the Foundation employ the key contributors to OpenStack so that the heart of OpenStack had long-term stability that wasn't subject to a popularity contest every six months."

Boris Renski, co-founder of Mirantis, stated that as of today a number of companies are back to doubling down on OpenStack as their core focus. He attributes this to the Foundation's focus on edge computing, with the highest interest in OpenStack being shown by China.

The OpenStack Foundation's decision to tackle open source infrastructure problems, while keeping the core of the actual OpenStack project intact, is refreshing. The only real competition it may face is from the Linux Foundation-backed Cloud Native Computing Foundation.

Read Next
OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Introducing OpenStack Foundation's Kata Containers 1.0


Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go

Amrata Joshi
16 Nov 2018
2 min read
Google will soon roll out a new feature on Google Maps that will let users message business profiles in nearby locations, giving them the opportunity to ask questions while looking for things to do, places to go, or shops to visit. Last year, Google enabled users in selected countries to message businesses from Business Profiles on Google.

On Wednesday, Aditya Tendulkar, Product Manager at Google Maps, wrote in a blog post: "You'll see your messages with the businesses you connect with via 'Business Profiles' in the side menu, within the Google Maps app on both, Android as well as iOS devices."

In order to accept messages from users, local businesses will have to install the My Business app and enable messaging. If you try to reach out to a shop with a question that its website doesn't answer, you can simply send a message, which is especially convenient while traveling. Users just need to look for the "message" button on Business Profiles on Search and Maps. Users in countries worldwide will also be able to chat with businesses for the very first time.

Another advantage of the My Business app is that it is a free tool. It will help small business owners reach more people online and connect with their customers through Google, which will in turn help them grow their ventures. It could work much like a Facebook page, where it is easy to message businesses; integrating the My Business app within Maps is arguably better, as there is no need for a separate messaging app.
https://twitter.com/verge/status/1055489488858869761

Google's messaging platforms haven't worked well in the past: Hangouts and Allo are not much used, and RCS Chat hasn't launched in the US across all major carriers. It will be interesting to see the fate of the My Business app. Read more about this news on Google's official blog.

Google's Pixel camera app introduces Night Sight to help click clear pictures with HDR+
Google makes major inroads into healthcare tech by absorbing DeepMind Health
Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia

Mozilla shares why Firefox 63 supports Web Components

Bhagyashree R
16 Nov 2018
3 min read
Mozilla's Firefox 63 comes with support for two Web Components: Custom Elements and Shadow DOM. Yesterday, Mozilla shared how these new capabilities and resources are helping web developers create reusable and modular code.

What are Web Components?
Web Components is a suite of web platform APIs that allow you to create new custom, reusable, and encapsulated HTML tags to use in web pages and web apps. Custom components and widgets built on the Web Components standards work across modern browsers and can be used with any JavaScript library or framework that works with HTML. Let's discuss the two tent-pole standards of Web Components v1:

Custom Elements
Custom Elements, as the name suggests, allow developers to create "customized" HTML tags. With Custom Elements, web developers can create new HTML tags, improve existing HTML tags, or extend components created by other developers. It gives developers a web-standards-based way to create reusable components using nothing more than vanilla JS/HTML/CSS. To prevent future conflicts, all Custom Element names must contain a dash, for example, my-element. The following are the powers Custom Elements provides:

1. Earlier, browsers didn't allow extending the built-in HTMLElement class or its subclasses. You can now do that with Custom Elements.
2. For existing tags such as the p tag, the browser knows to map it to the HTMLParagraphElement class. But what happens in the case of Custom Elements? In addition to extending built-in classes, there is now a Custom Element Registry for declaring this mapping. It is the controller of custom elements on a web document, allowing you to register a custom element on the page, return information on what custom elements are registered, and so on.
3. Additional lifecycle callbacks such as connectedCallback, disconnectedCallback, and attributeChangedCallback are added for detecting element creation, insertion into the DOM, attribute changes, and more.

Shadow DOM
Shadow DOM gives you an elegant way to overlay the normal DOM subtree with a special document fragment that contains another subtree of nodes. It introduces the concept of a shadow root. A shadow root has standard DOM methods and can be appended to like any other DOM node, but it is rendered separately from the document's main DOM tree.

Shadow DOM also introduces scoped styles to the web platform. It allows you to bundle CSS with markup, hide implementation details, and author self-contained components in vanilla JavaScript without needing any tools or adhering to naming conventions.

The underlying concept of Shadow DOM
Shadow DOM is similar to the regular DOM, but differs in two ways: how it's created and used, and how it behaves in relation to the rest of the page. Normally, DOM nodes are created and appended as children of another element. Using Shadow DOM, you create a scoped DOM tree that's attached to an element but separate from its actual children. This scoped subtree is called a shadow tree, and the element it is attached to is called the shadow host. Anything added in the shadows becomes local to the hosting element, including <style>; this is how the Shadow DOM achieves CSS style scoping.

Read more in detail about Web Components on Mozilla's website.

Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs
Mozilla shares how AV1, the new open source royalty-free video codec, works
This fun Mozilla tool rates products on a 'creepy meter' to help you shop safely this holiday season
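Putting the two standards together, here is a minimal sketch of a custom element that scopes its styles with a shadow root, using only the standard Custom Elements and Shadow DOM APIs; the element name and markup are invented for the example:

```javascript
// A custom element: the tag name must contain a dash.
class UserCard extends HTMLElement {
  constructor() {
    super();
    // Attach a shadow root; its contents render separately from the main DOM.
    const shadow = this.attachShadow({mode: 'open'});
    shadow.innerHTML = `
      <style>
        /* Scoped: this rule cannot leak out to the rest of the page. */
        p { color: steelblue; font-weight: bold; }
      </style>
      <p>Hello, <span id="name"></span>!</p>
    `;
  }

  // Lifecycle callback: runs when the element is inserted into the DOM.
  connectedCallback() {
    this.shadowRoot.getElementById('name').textContent =
      this.getAttribute('name') || 'stranger';
  }
}

// Register the mapping in the Custom Element Registry.
customElements.define('user-card', UserCard);

// Usage in HTML: <user-card name="Ada"></user-card>
```

The p style inside the shadow root never touches paragraphs elsewhere on the page, which is the CSS scoping the article describes.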


Node v11.2.0 released with major updates in timers, windows, HTTP parser and more

Amrata Joshi
16 Nov 2018
2 min read
Yesterday, the Node.js community released Node v11.2.0. The new version comes with an experimental HTTP parser (llhttp), along with updates to timers, Windows support, and more. Node v11.1.0 was released earlier this month.

Major updates
Node v11.2.0 comes with a major update to timers, fixing an issue that could cause setTimeout to stop working as expected.
If the node.pdb file is available, a crashing process will now show the names of stack frames.
This version improves the installer's new stage that installs native build tools.
Node v11.2.0 adds a prompt to the tools installation script, giving a visible warning and a prompt that lessens the probability of users skipping ahead without reading.
On Windows, the windowsHide option has been set to false. This lets detached child processes and GUI apps start in a new window.
This version also introduces an experimental llhttp HTTP parser. llhttp is written in human-readable TypeScript, and is verifiable and easy to maintain. The parser is used to generate output C and/or bitcode artifacts, which can be compiled and linked with an embedder's program (like Node.js).
The release notes also mention the eventEmitter.emit() method, which allows an arbitrary set of arguments to be passed to the listener functions.

Improvements in Cluster
The cluster module allows easy creation of child processes that share server ports. It supports two methods of distributing incoming connections. The first is the round-robin approach, the default on all platforms except Windows: the master process listens on a port, accepts new connections, and distributes them across the workers in a round-robin fashion, which avoids overloading any one worker process. In the second approach, the master process creates the listen socket and sends it to interested workers, which then accept incoming connections directly. Theoretically, the second approach gives the best performance.

Read more about this release on the official Node.js page.

Node.js v10.12.0 (Current) released
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
low.js, a Node.js port for embedded systems
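For a sense of how the cluster module's round-robin distribution is used, here is a minimal HTTP server sketch following the standard cluster pattern; the port number is arbitrary:

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // The master accepts connections and hands them to workers in
  // round-robin fashion (the default scheduling policy off Windows).
  console.log(`Master ${process.pid} is running`);
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} exited`);
  });
} else {
  // Each worker shares the same port; the master distributes connections.
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(8000);
  console.log(`Worker ${process.pid} started`);
}
```

Repeated requests to http://localhost:8000 should report different worker PIDs as the master rotates connections across them.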