
Tech News - Application Development

279 Articles

Updated Google Mobile Services agreement requires OEMs to hide custom navigation systems and make devices fully compatible with the USB Type-C port

Fatema Patrawala
08 Oct 2019
4 min read
Yesterday, 9to5Google reported on the terms of the updated Google Mobile Services (GMS) agreement. Per the new terms, OEMs who utilize their own gesture navigation systems cannot have those available in the device's initial setup if it ships with Android 10.

Google has struggled to devise a new navigation system for Android over the last few releases. The two-button design from Pie was not well received in the market, and the new full-gesture setup in Android 10 also has its critics. With the new agreement, however, you will see a lot more of Google's gestures on upcoming Android 10 devices. At this year's Google I/O 2019, the company announced that it would support the new gestures and the three-button navbar going forward. It didn't rule out OEMs having their own custom gesture navigation and will indeed let them keep those, but there will be some restrictions. Notably, devices shipping with Android 10 will need to have either the classic three-button nav or Google's gesture navigation enabled out of the box, which suggests the two-button "pill" setup is effectively dead.

Android 10 devices will not offer custom navigation in the initial setup

Phones often let users choose their navigation options during setup, but Android 10 will not offer custom gesture navigation as an option in the setup wizard at all. You'll probably be able to turn on Google's gestures there, but something like Samsung's swipe-up targets will only be available if you dig into the settings.

The updated Google Mobile Services agreement thus puts into perspective what Google really wants for Android users. Manufacturers can still include their own navigation solutions, but those solutions must not be immediately available to users during the setup wizard; users must go into the device settings to toggle alternative navigation systems after the initial setup. Not only are OEM-specific navigation systems disallowed during setup, manufacturers can't even prompt users to enable them in any way: no notifications, no pop-ups, nothing else. Google also requires OEMs to hide their custom navigation systems deeper in the settings; manufacturers can put these options under sections like "Advanced" that are not easily accessible to the user.

This isn't necessarily a bad call by Google. More uniformity throughout the Android ecosystem can only be a good thing: the gestures will mature quicker, apps will be forced to adhere to the new navigation systems, and users will get used to them more easily.

Google Mobile Services requires new Android devices to be compatible with USB Type-C chargers

The new Google Mobile Services agreement also outlines the technical requirements that smartphone makers must meet in order to preload Google Mobile Services. Nearly every Android smartphone or tablet sold internationally has met these requirements, because having access to Google apps is critical for sales outside of China. Subsection 13.6 of the document, titled "USB Type-C Compatibility", states: "New DEVICES launching from 2019 onwards, with a USB Type-C port MUST ensure full interoperability with chargers that are compliant with the USB specifications and have the USB Type-C plug."

On Reddit, the news has gained significant traction, with Android users arguing that this move by Google is good only if the gesture navigation works well.
Here are some of the comments. One user writes, "Im sure people will hate this, but im for easier usage for the general public." Another user responds, "Sure. As long as the gesture usage works really, really well. If it doesn't, this is a bad move."

Google Project Zero discloses a zero-day Android exploit in Pixel, Huawei, Xiaomi and Samsung devices
Google's DNS over HTTPS encryption plan faces scrutiny from ISPs and the Congress
Google Chrome Keystone update can render your Mac system unbootable
Google's V8 JavaScript engine adds support for top-level await
Google announces two new attribute links, Sponsored and UGC and updates "nofollow"


GitHub releases Vulcanizer, a new Golang library for operating Elasticsearch

Natasha Mathur
06 Mar 2019
2 min read
Yesterday, the GitHub team released Vulcanizer, a new Go library for interacting with an Elasticsearch cluster. Vulcanizer is not a full-fledged Elasticsearch client; rather, it aims to provide a high-level API to help with common tasks associated with operating an Elasticsearch cluster, such as querying the health status of the cluster, migrating data from nodes, updating cluster settings, and more.

GitHub uses Elasticsearch as the core technology behind its search services. GitHub had already released the Elastomer library for Ruby, and it uses the Elastic library for Go by user olivere. However, the GitHub team wanted a high-level API that corresponded to common operations on a cluster, such as disabling allocation or draining the shards from a node: a library focused more on administrative operations that could be easily used by their existing tooling. Since Go's design encourages the construction of composable software, they decided it was a good fit; nearly all of these operations can be performed through Elasticsearch's HTTP interface, where you don't want to be writing JSON by hand.

Vulcanizer is great at getting the nodes of a cluster, updating the max recovery cluster settings, and safely adding or removing nodes from the exclude settings, making sure that shards don't unexpectedly allocate onto a node. Vulcanizer also helps build ChatOps tooling around Elasticsearch quickly for common tasks. The GitHub team states that having all the Elasticsearch functionality in its own library helps keep its internal apps slim and isolated.
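Vulcanizer itself is a Go library, but the administrative operations it wraps are standard Elasticsearch HTTP endpoints. As a rough, hedged sketch of the kind of call involved (this is the plain Elasticsearch REST API driven from Python, not Vulcanizer's own code, and the localhost address is an assumption), disabling shard allocation before draining a node looks like this:

```python
import requests

ES = "http://localhost:9200"  # assumed local cluster

# Standard Elasticsearch cluster-settings endpoint: disable shard
# allocation so shards don't unexpectedly move while a node is drained.
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.enable": "none"}},
)
resp.raise_for_status()

# Cluster health, another task Vulcanizer exposes at a high level.
print(requests.get(f"{ES}/_cluster/health").json()["status"])
```

A library like Vulcanizer wraps sequences of such calls behind safe, composable functions so operators don't have to assemble the JSON by hand.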
For more information, check out the official GitHub Vulcanizer post.

GitHub increases its reward payout model for its bug bounty program
GitHub launches draft pull requests
GitHub Octoverse: top machine learning packages, languages, and projects of 2018


Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes

Natasha Mathur
04 Apr 2019
3 min read
Pivotal Inc., a software and services firm, announced yesterday that it has teamed up with Heroku to create Cloud Native Buildpacks for Kubernetes and beyond. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images and are based on the popular buildpack model. The new project is aimed at making developers more productive with Kubernetes.

The Cloud Foundry Buildpacks team also released a selection of next-gen Cloud Foundry buildpacks that are compatible with the Cloud Native Buildpacks. This will allow users to try buildpacks out on Pivotal Container Service (PKS) and Pivotal Application Service (PAS).

https://twitter.com/pivotalcf/status/1113426937685446657

"The project aims to deliver a consistent platform-to-buildpack contract for use in more places. The interface defined by this contract is informed by learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku," states the Pivotal team.

With the new Cloud Native Buildpacks, you can create containers by just pushing the code, without managing runtime dependencies yourself. On a cf push of custom code, buildpacks automatically add in the framework dependencies and create an application "droplet" that can be run on the platform. This droplet model allows Cloud Foundry to handle all the dependency updates, and application runtimes can be updated by pulling in the latest buildpacks and rebuilding a droplet. Cloud Native Buildpacks expand on this idea and build an OCI (Open Container Initiative) image capable of running on any platform. "We believe developers will love the simplicity of this single command to get a production quality container when they prefer not to author and maintain their own Dockerfile," states the Pivotal team.

Other reasons why Cloud Native Buildpacks are a step ahead of traditional buildpacks:

Portability through the OCI standard. Cloud Native Buildpacks produce OCI images directly from source code, which makes them much more portable and easy to use with Kubernetes and Knative.
Better modularity. Cloud Native Buildpacks are modular, offering platform operators more control over how developers build their code.
Speed. Cloud Native Buildpacks build faster because of advanced build caching, layer reuse, and data deduplication.
Fast troubleshooting. Cloud Native Buildpacks help troubleshoot production issues much faster because they can also be used in a developer's local environment.
Reproducible builds. Cloud Native Buildpacks allow reproducible container image builds.

What next?

The Pivotal team states that Cloud Native Buildpacks need some more work before they are ready for enterprise scenarios. Pivotal is currently exploring three new features: image promotion, operator control, and automated image patching. For image promotion, Pivotal is exploring a build service for image updating, which would allow developers to promote images through environments and across PCF foundations. Pivotal is also exploring a declarative configuration model that will deliver new images to your registry whenever your configuration falls out of sync. "The best developers strive to eliminate toil from their lives. These engineers figure that if a task doesn't add value, it should be automated… with Cloud Native Buildpacks, developers can happily remove… toil from their jobs," states the Pivotal team. For more information, check out the official Pivotal blog.
CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits


Microsoft announces .NET Standard 2.1

Prasad Ramesh
06 Nov 2018
3 min read
A year after shipping .NET Standard 2.0, Microsoft announced .NET Standard 2.1 yesterday. In all, 3,000 APIs are planned for inclusion in .NET Standard 2.1, and progress on GitHub had reached 85% completion at the time of writing. The new features in .NET Standard 2.1 are as follows.

Span<T> in .NET Standard 2.1

Span<T> was added in .NET Core 2.1. It is an array-like type that allows representing managed and unmanaged memory in a uniform way. Span<T> is an important performance improvement since it allows managing buffers in a more efficient way: it supports slicing without copying and can help reduce allocations and copying.

Foundational APIs working with spans

Span<T> itself is available as a .NET Standard-compatible NuGet package, but that package cannot extend the members of existing .NET Standard types that deal with spans. For example, .NET Core 2.1 added many APIs that allow working with spans. To bring spans to .NET Standard, these companion APIs were added.

Reflection emit added in .NET Standard 2.1

.NET Standard 2.1 adds Lightweight Code Generation (LCG) and Reflection Emit. Two new capability APIs are exposed: one to check whether generating code is supported at all (RuntimeFeature.IsDynamicCodeSupported), and one to check whether the generated code is compiled rather than interpreted (RuntimeFeature.IsDynamicCodeCompiled).

SIMD

SIMD types have been supported for a while now and have been used to speed up basic operations like string comparisons in the BCL. There have been requests to expose these APIs in .NET Standard, as the functionality requires runtime support and cannot be provided meaningfully as a NuGet package.

ValueTask and ValueTask<T>

The biggest feature of .NET Core 2.1 was its improved support for high-performance scenarios, which included making async/await more efficient. ValueTask<T> allows returning results without allocating a new Task<T> if the operation completed synchronously. .NET Core 2.1 improved this further, which made it useful to have a corresponding non-generic ValueTask that reduces allocations even for cases where the operation has to complete asynchronously; types like Socket and NetworkStream now utilize this. By exposing these APIs in .NET Standard 2.1, library authors benefit from these improvements both as consumers and as producers.

DbProviderFactories

DbProviderFactories wasn't available in .NET Standard 2.0; it will be in 2.1. DbProviderFactories allows libraries and applications to use a specific ADO.NET provider without knowing any of its specific types at compile time.

Other changes

Many small features have been added across the base class libraries, including System.HashCode for combining hash codes and new overloads on System.String. There are roughly 800 new members in .NET Core, and all of them are added in .NET Standard 2.1. .NET Framework 4.8 will remain on .NET Standard 2.0, while .NET Core 3.0 and the upcoming versions of Xamarin, Mono, and Unity will be updated to implement .NET Standard 2.1. To ensure correct implementation of APIs, a review board has been set up to sign off on API additions to .NET Standard. The board, chaired by Miguel de Icaza, comprises representatives from the .NET platform, Xamarin and Mono, Unity, and the .NET Foundation. There will also be a formal approval process for new APIs. To know more, visit the Microsoft blog.
.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
What to expect in ASP.NET Core 3.0


Google open-sources Sandboxed API, a tool that helps automate porting of existing C and C++ code

Amrata Joshi
19 Mar 2019
2 min read
Yesterday, the team at Google open-sourced Sandboxed API, a tool that Google has been using internally in its data centers for years. It is a project for sandboxing C and C++ libraries running on Linux systems, and it is now available on GitHub. Sandboxed API helps developers automate the process of porting their existing C and C++ code to run on top of Sandbox2, Google's custom-made sandbox environment for Linux operating systems. Sandbox2 has also been open-sourced and is included in the Sandboxed API GitHub repository.

Christian Blichmann and Robert Swiecki, from Google's ISE Sandboxing team, said, "Many popular software containment tools might not sufficiently isolate the rest of the OS, and those which do, might require time-consuming redefinition of security boundaries for each and every project that should be sandboxed."

The idea behind sandboxing

The idea behind sandboxing is to prevent bugs from spreading from one process to another, or to the underlying operating system and the kernel. Many software projects process externally generated, potentially untrusted data, for instance converting user-provided picture files into different formats or executing user-generated software code. If a software library that parses such data is complex, there is a high possibility that it might fall victim to certain types of security vulnerabilities, such as memory corruption bugs or other problems related to the parsing logic, and these vulnerabilities can have a serious impact on security. To counter these risks, developers turn to a software isolation method known as sandboxing: they ensure that only the resources (files, networking connections, and other operating system resources) the parsing code actually needs are accessible to it.

The team plans to add support for more operating systems and bring Sandboxed API to Unix-like systems such as the BSDs (FreeBSD, OpenBSD) and macOS. Google also aims to bring CMake support to the API.
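Sandbox2 itself enforces isolation with Linux kernel primitives such as namespaces and syscall filtering, and it targets C/C++ code. The underlying idea, running risky parsing in a separate, constrained process so a crash or compromise is contained, can be sketched loosely in Python. This is only an analogy under stated assumptions (a Linux host, a toy parser), not how Sandboxed API actually works:

```python
import multiprocessing
import resource

def _parse_untrusted(data, out):
    # Constrain this child process before touching untrusted input:
    # cap CPU seconds and address space so a runaway or exploited
    # parser is contained and killed by the kernel.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
    out.put(len(data.decode("utf-8", errors="replace")))  # toy "parser"

def parse_in_child(data: bytes):
    out = multiprocessing.Queue()
    child = multiprocessing.Process(target=_parse_untrusted, args=(data, out))
    child.start()
    child.join(timeout=5)
    if child.exitcode != 0:
        raise RuntimeError("untrusted parser crashed or was killed")
    return out.get()

print(parse_in_child(b"some untrusted bytes"))
```

Sandbox2 goes much further, restricting which syscalls and OS resources the sandboxed library may use rather than just its CPU time and memory.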
To know more about this news in detail, check out Google's blog post.

Google to be the founding member of CDF (Continuous Delivery Foundation)
Google announces the stable release of Android Jetpack Navigation
#GooglePayoutsForAll: A digital protest against Google's $135 million execs payout for misconduct


Eclipse IDE’s Photon release will support Rust

Pavan Ramchandani
29 Jun 2018
2 min read
The Eclipse Foundation announced the Photon release of the Eclipse IDE, and with this release the community announced support for the Rust language, giving Rust developers a native Eclipse IDE experience. This release marks the thirteenth annual simultaneous release of Eclipse.

The important features in the Photon release are as follows:

Full Eclipse IDE support for building, debugging, running, and packaging Rust applications, providing a good user experience for Rust development.
More C# support for editing and debugging code, including syntax coloring, autocomplete suggestions, diagnostics, and navigation.
New frameworks added to the IDE, such as RedDeer (a framework for building automated tests), Yasson (a Java framework for binding with JSON documents), and JGit (Git for Java), among others.
More updates and features for the dynamic languages toolkit, Eclipse Modeling Framework (EMF), PHP development tools, C/C++ development tools, tools for Cloud Foundry, the dark theme, and improvements to background colors and popup dialogs.

The Eclipse Foundation has also introduced Language Server Protocol (LSP) support with the Photon release. With LSP-based releases, Eclipse will deliver support for popular and emerging languages in the IDE. Within the normal release cycle, the LSP work will focus on keeping pace with emerging tools and technologies and on developers and their commercial needs in future releases.

For more information on the Photon project and contributing to the Eclipse community, you can check out the Eclipse Meetup event.

What can you expect from the upcoming Java 11 JDK?
Perform Advanced Programming with Rust
The top 5 reasons why Node.js could topple Java

“Facebook is the new Cigarettes”, says Marc Benioff, Salesforce Co-CEO

Kunal Chaudhari
02 Oct 2018
6 min read
So, it was that time of the year when Salesforce enthusiasts, thought leaders, and pioneers gathered in downtown San Francisco to attend the annual Dreamforce conference last week. This year marked the 15th anniversary of the Salesforce annual conference, with over 100,000 trailblazers flocking to the Bay Area. Through the years, technological development in the platform has been the focal point of these conferences, but it was different this time around. A lot has happened between the 2017 conference and now, especially after Facebook's Cambridge Analytica scandal. First WhatsApp co-founder Jan Koum parted ways with Facebook, and now the Instagram co-founders have called it quits. Interestingly, Marc Benioff gave an interview to Bloomberg Technology in which he condemned Facebook as the 'new cigarettes'.

To regulate or not to regulate, that is the question

Marc Benioff has been a vocal critic of the social media platform. Earlier this year, when innovators and tech leaders gathered at the annual World Economic Forum in the Swiss Alps of Davos, Benioff was one of the panelists discussing trust in technology, where he made some interesting points. He took the example of the financial industry a decade ago, where bankers were confident that new products like credit default swaps (CDSs) and collateralized debt obligations (CDOs) would lead to better economic growth; instead, they led to the biggest financial crisis the world had ever seen. Similarly, he argued, cigarettes were introduced as a great pastime product, without any awareness of their adverse effects on health. To cut the story short, the point Benioff was making is that these industries were able to take advantage of the addictive behavior of humans because of a clear lack of regulation from governmental bodies. Only when regulators became strict with these sectors and public reforms came into the picture were these products brought under control.

Similarly, Benioff has called for regulation of companies in light of the recent news linking Russian interference to the US presidential elections. He urged the CEOs of companies to take greater responsibility for their consumers and their products, without explicitly mentioning any name. Let's take a guess: Mark Zuckerberg, anyone?

While Benioff made a strong case for regulation, the solution seemed to be more politically driven. Rachel Botsman, Visiting Academic and Lecturer at the Saïd Business School, University of Oxford, argued that regulators are not aware of the new decentralized nature of today's technological platforms. And ultimately, who do we want as the arbiters of truth: Facebook, regulators, or the users? And where does the hierarchy of accountability lie in this new structure of platforms? The big question remains.

The ethical and humane side of technology

Fast forward to Dreamforce 2018, with star-studded guest speakers ranging from former American Vice President Al Gore to Andre Iguodala of the NBA's Golden State Warriors. Benioff opened with his usual keynote, but this time with a lot of enthusiasm, in full evangelical mode as one might say. The message from the Salesforce CEO was clear: "We are in the fourth industrial revolution." Salesforce announced plenty of new products and some key strategic business partnerships, with the likes of Apple and AWS now joining Salesforce.
While these announcements summarized the technological advancements in the platform, his interview with Bloomberg Technology's Emily Chang was quite opportune. The interview started casually, with Benioff talking about sharing his job with new co-CEO Keith Block. But soon they discussed the news about Instagram founders Kevin Systrom and Mike Krieger leaving the services of parent company Facebook. While Benioff maintained his position on regulation, he also discussed the ethics and humane side of technology. The ethics of technology has come under the spotlight in recent months with the advancements in artificial intelligence. To address these questions, Benioff said that Salesforce has taken a first step by setting up the "Office of Ethical and Humane Use of Technology" at the Salesforce Tower in San Francisco.

At first glance, this initiative looks like a solid first step towards solving the problem of technology being used for unethical work. But going back to the argument posed by Rachel Botsman: who actually leverages technology to do unethical work? Is it the company or the consumer? While Salesforce boasts about its stand on the ethics of building a technological system, Marc Benioff is still silent on the question of Salesforce's ties with the US Customs and Border Protection (CBP) agency, which follows Donald Trump's strong anti-immigration agenda. Protesters took a stand against this issue during the Salesforce conference, and hundreds of Salesforce employees wrote an open letter to Benioff asking him to cut ties with the CBP. In response, Benioff said that Salesforce's contract with CBP does not deal directly with the separation of children at the Mexican border.

One decision at a time

Ethics is largely driven by human behavior. While innovators believe that technological advancement should happen regardless of the outcome, it is the responsibility of every stakeholder in a company, be it a developer, an executive, or a customer, to take action against unethical work. And with each mistake, companies and CEOs are given opportunities to set things right. Take McKinsey & Company, for example. The top management consultancy was under fire due to its scandal involving the South African government. But when the firm again came under scrutiny over its ties with the US CBP, McKinsey's new managing partner, Kevin Sneader, came out saying that the firm "will not, under any circumstances, engage in any work, anywhere in the world, that advances or assists policies that are at odds with our values." It's now time for companies like Facebook and Salesforce to set the benchmark for the future of technology.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
SAP creates AI ethics guidelines and forms an advisory panel
The Cambridge Analytica scandal and ethics in data science
Introducing Deon, a tool for data scientists to add an ethics checklist
The ethical dilemmas developers working on Artificial Intelligence products must consider
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms


Pull Panda is now a part of GitHub; code review workflows now get better!

Amrata Joshi
18 Jun 2019
4 min read
Yesterday, the team at GitHub announced that it has acquired Pull Panda for an undisclosed amount, to help teams create more efficient and effective code review workflows on GitHub.

https://twitter.com/natfriedman/status/1140666428745342976

Pull Panda helps thousands of teams work together on code and improve their process through a combination of three apps: Pull Reminders, Pull Analytics, and Pull Assigner.

Pull Reminders: Users get a prompt in Slack whenever a collaborator needs a review. Automatic reminders ensure that pull requests aren't missed.
Pull Analytics: Users get real-time insight and can make data-driven improvements, creating a more transparent and accountable culture.
Pull Assigner: Users can automatically distribute code review across their team, so that no one gets overloaded and knowledge is spread around.

Pull Panda helps teams ship faster and gain insight into bottlenecks in the process. Abi Noda, the founder of Pull Panda, highlighted the two major pain points that led him to start the company. The first was that on fast-moving teams, pull requests are often forgotten, which delays code reviews and eventually delays shipping new features to customers. As Noda stated in a video, "I started Pull Panda to solve two major pain points that I had as an engineer and manager at several different companies. The first problem was that on fast moving teams, pull requests easily are forgotten about and often slip through the cracks. This leads to frustrating delays in code reviews and also means it takes longer to actually ship new features to your customers."

https://youtu.be/RtZdbZiPeK8

To solve this problem, the team built Pull Reminders, a GitHub app that automatically notifies the team about their code reviews. The second problem was that it was difficult to measure and understand a team's development process in order to identify bottlenecks. To solve this, the team built Pull Analytics, which provides real-time insights into the software development process and highlights the current code review workload across the team, so the team knows who is overloaded and who might be available.

Many customers also discovered that the majority of their code reviews were done by the same few people on the team. To address this, the team built Pull Assigner, which offers two algorithms for automatically assigning reviewers (a conceptual sketch of both follows below). The first is Load Balance, which equalizes the number of reviews so everyone on the team does the same number. The second is a round robin algorithm that assigns additional reviewers so that knowledge can be spread across the team.

Nat Friedman, CEO of GitHub, said, "We'll be integrating everything Abi showed you directly into GitHub over the coming months. But if you're impatient, and you want to get started now, I'm happy to announce that all three of the Pull Panda products are available for free in the GitHub marketplace starting today. So we hope you enjoy using Pull Panda and we look forward to your feedback. Goodbye. It's over."

Pull Panda will no longer offer its Enterprise plan, though existing Enterprise customers can continue to use the on-premises offering. All paid subscriptions have been converted to free subscriptions, and new users can install Pull Panda for their organizations for free from the Pull Panda website or the GitHub Marketplace.
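As a purely conceptual illustration of the two Pull Assigner strategies described above, here is a short Python sketch. The function names and data are invented for illustration; this is not Pull Panda's actual code:

```python
import itertools
from collections import Counter

def load_balance(reviewers, open_reviews, n=1):
    """Pick the n reviewers with the fewest open reviews right now."""
    return sorted(reviewers, key=lambda r: open_reviews[r])[:n]

def round_robin(reviewers):
    """Cycle through the team so assignments spread evenly over time."""
    return itertools.cycle(reviewers)

team = ["ana", "ben", "chandra"]
open_reviews = Counter({"ana": 4, "ben": 1, "chandra": 2})

print(load_balance(team, open_reviews))  # ['ben']: lightest current load
assigner = round_robin(team)
print(next(assigner), next(assigner))    # ana ben: evenly rotating
```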
The official GitHub blog post reads, "We plan to integrate these features into GitHub but hope you'll start benefiting from them right away. We'd love to hear what you think as we continue to improve how developers work together on GitHub." To know more about this news, check out GitHub's post.

GitHub introduces 'Template repository' for easy boilerplate code management and distribution
GitHub Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise


What's new in Visual Studio Code 1.22

Amarabha Banerjee
10 Apr 2018
3 min read
Microsoft recently released Visual Studio Code 1.22 with a few additions and improvements. The primary feature Microsoft has introduced is called "Logpoints". The idea of Logpoints is very literal: they are like breakpoints, but instead of stopping code execution they log a message, so developers can keep track of events while the program keeps running.

The primary changes are:

Syntax-aware code folding: This feature allows better code folding for CSS, HTML, JSON, and Markdown files. Folding is based on code syntax rather than indentation, which makes the code much more readable and developer-friendly.
Conversion to ES6 refactoring: How many times have you thought that a little bit of help while coding would have made your experience better? The code suggestion button (an elliptical hover button) will suggest modern ES6 code snippets, and developers have the choice to accept or modify them. A welcome feature for new and mid-level programmers for sure.
Auto attach to process: This feature provides a lot of help for Node.js developers. It automatically starts debugging Node.js programs and applications the moment you launch them, eliminating the need for a dedicated launch configuration.

The other important features of the new version are:

Cross-file error, warning, and reference navigation: This helps you navigate through a workspace efficiently.
Improved large-file support: This enables faster syntax highlighting and better memory allocation for bigger files, making the overall editing and debugging experience faster.
Multi-line links in the terminal: Links that wrap across several lines in the integrated terminal are now hyperlinked correctly.
Better organization of JavaScript/TypeScript imports: This feature helps programmers remove unused imports and sort them in a more orderly manner.
Emmet wrap preview: This feature provides a live preview for Emmet's "wrap with abbreviation" functionality.

With these new and exciting features, Visual Studio Code is surely moving towards a more user-friendly and predictive coding platform for programmers. We will keep a close watch on future releases and share updates on how they target better code reusability, easier imports, and better debugging functionality. Read about the full update on the official Visual Studio Code website.

C++, SFML, Visual Studio, and Starting the first game


A vulnerability discovered in the Kubernetes kubectl cp command can allow a malicious directory traversal attack on a targeted system

Amrata Joshi
25 Jun 2019
3 min read
Last week, the Kubernetes team announced that a security issue (CVE-2019-11246) had been discovered in the kubectl cp command. According to the team, the issue could lead to a directory traversal in which a malicious container could replace or create files on a user's workstation. The vulnerability impacts kubectl, the command-line interface used to run commands against Kubernetes clusters. It was discovered by Charles Holmes of Atredis Partners as part of the ongoing Kubernetes security audit sponsored by the CNCF (Cloud Native Computing Foundation).

This particular issue is a client-side defect, and it requires user interaction to exploit. The issue is of high severity, and the Kubernetes team encourages users to upgrade kubectl to Kubernetes 1.12.9, 1.13.6, 1.14.2, or a later version to fix it, following the installation instructions in the docs. The announcement reads, "Thanks to Maciej Szulik for the fix, to Tim Allclair for the test cases and fix review, and to the patch release managers for including the fix in their releases."

The kubectl cp command copies files between containers and the user's machine. To copy files from a container, Kubernetes runs tar inside the container to create an archive and copies it over the network, after which kubectl unpacks it on the user's machine. If the tar binary in the container is malicious, it could run arbitrary code and emit a malicious archive, which an attacker could use to write files to any path on the user's machine when kubectl cp is called, limited only by the system permissions of the local user.

The vulnerability is quite similar to CVE-2019-1002101, an earlier issue in the kubectl binary, specifically in the kubectl cp command, which an attacker could likewise exploit to write files to any path on the user's machine. Wei Lien Dang, co-founder and vice president of product at StackRox, said, "This vulnerability stems from incomplete fixes for a previously disclosed vulnerability (CVE-2019-1002101). This vulnerability is concerning because it would allow an attacker to overwrite sensitive file paths or add files that are malicious programs, which could then be leveraged to compromise significant portions of Kubernetes environments."

Users are advised to run kubectl version --client; if it does not report client version 1.12.9, 1.13.6, 1.14.2, or newer, they are running a vulnerable version and need to upgrade.
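To make the defect class concrete, here is a hedged, conceptual Python sketch of tar-based directory traversal and the kind of path check that prevents it. This is illustrative only; kubectl is written in Go, and its actual fix differs:

```python
import os
import tarfile

def safe_extract(archive_path: str, dest: str) -> None:
    """Refuse archive members whose resolved path escapes dest.

    A malicious archive can carry a member named '../../.bashrc';
    extracting it blindly writes outside the target directory, which
    is the essence of a directory traversal like CVE-2019-11246.
    """
    dest_root = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked traversal attempt: {member.name}")
        tar.extractall(dest)
```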
To know more about this news, check out the announcement.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

GitHub introduces Project Paper Cuts for developers to fix small workflow problems, iterate on UI/UX, and find other ways to make quick improvements

Melisha Dsouza
29 Aug 2018
4 min read
GitHub has introduced "Project Paper Cuts", inspired by the many small refinements GitHub has shipped over time. The project aims to fix smaller code-related and UI issues that users face during a project development workflow.

Project Paper Cuts is committed to working directly with the community to fix small to medium-sized workflow problems, iterate on UI/UX, and find other ways to make quick improvements to nagging issues that users often encounter in their projects. It prioritizes fixes for issues that have the most impact but are backed by hardly any or no discussion. Most "paper cuts" will have a public changelog entry associated with them so users can keep pace.

A few of the less-talked-about issues that GitHub has already solved:

#1 Unselect markers when copying and pasting the contents of a diff: The + and - diff markers are no longer copied to the clipboard when users copy the contents of a diff.

#2 Edit a repository's README from the repository root: Users who have permission to push to a repository can edit its README file from the repository root by clicking the pen icon to the right of the README's file header.

#3 Access your repositories straight from the profile dropdown: Users can use the profile dropdown, on any page, to go straight to the "Your repositories" tab within their user profile.

#4 Highlight permalinked comments: When following a permalink to a specific comment in an issue or pull request, the comment is highlighted so that it is easy to find among the other comments in the thread.

#5 Remove files from a pull request with a button: Users with write permission can click the 'trash' icon for a file right in the pull request's "Files changed" view to make a commit that removes it.

#6 Branch names in merge notification emails: The email notification from GitHub about a merge now includes the name of the base branch that the change was merged into.

#7 Create new pull requests from the repository's Pull requests page: When a user pushes branches while on the "Pull requests" tab, GitHub now displays the dynamic "Compare and pull request" widget, so a pull request can be created without switching back to the "Code" tab.

#8 Add a teammate from the team discussions page: Users can add an organization member to a team directly from the team discussion page by clicking the + button in the sidebar.

#9 Collapse all diffs in a pull request at once: When a pull request contains a lot of changed files, code reviewers find it hard to isolate the changes that matter to them. Reviewers can now collapse or expand the contents of all diffs in a pull request by holding down the Alt key and clicking the inverted caret icon in any file header. They can also use the "Jump to file or symbol" dropdown to jump to the file they want to review, which automatically expands it.

#10 Copy the URL of a comment: Previously, to grab a permalink to a comment within an issue or pull request, users had to copy the URL from the comment's timestamp. They can now click Copy URL in the comment's options menu to quickly copy the URL to the clipboard.

Project Paper Cuts is aimed squarely at helping all developers do their best work, faster. By incorporating customer feedback into the project, GitHub is paving the way for small changes in the way it works.
You can read the detailed announcement on the GitHub Blog to know more about Project Paper Cuts.

Git-bug: A new distributed bug tracker embedded in git
Microsoft's GitHub acquisition is good for the open source community
GitHub open sources its GitHub Load Balancer (GLB) Director


TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and bug fixes. Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0. The 2.0 API is now final, with the symbol renaming and deprecation changes completed, and it is available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware

tf.distribute.Strategy supports multiple user segments, including researchers and ML engineers, provides good performance, and allows easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes. tf.distribute.Strategy can be used with:

TensorFlow's high-level APIs
tf.keras
tf.estimator
Custom training loops

The TensorFlow 2.0 beta also simplifies the API for custom training loops, again based on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and they make it easier to debug the model and the training loop.

Model subclassing

Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, and the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively, and it gives greater flexibility when creating models that are not otherwise easily expressible.

Breaking changes

tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. A checkpoint converter tool for converting optimizers has also been included with this release.

Bug fixes and other changes

This beta version includes many bug fixes and other changes. Some of them are mentioned below:

In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.
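As a minimal sketch of the two Keras-facing features described above, the following Python snippet builds a subclassed tf.keras.Model and trains it under tf.distribute.MirroredStrategy. The layer sizes and dummy data are invented for illustration:

```python
import numpy as np
import tensorflow as tf

class TinyRegressor(tf.keras.Model):
    """Subclassed model: layers in __init__, forward pass in call()."""

    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.out(self.hidden(inputs))

# MirroredStrategy replicates the model across all local GPUs (falling
# back to CPU); only the scope block changes vs. single-device code.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = TinyRegressor()
    model.compile(optimizer="adam", loss="mse")

# Dummy data stands in for a real input pipeline.
x = np.random.rand(256, 20).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, batch_size=32, epochs=2)
```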
The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they experience and give feedback. The general reaction to the release of the TensorFlow 2.0 beta is positive.

https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, "Can't wait to try that out!" However, some users have compared it to PyTorch, calling PyTorch more comprehensive than TensorFlow, a more powerful platform for research, and good for production. A user on Hacker News comments, "Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too."

Another user says, "Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other."

The TensorFlow team hopes to resolve the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
ML.NET 1.0 RC releases with support for TensorFlow models and much more!


PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Amrata Joshi
11 Jun 2019
3 min read
Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks for improving machine learning research reproducibility.

Reproducibility is an essential requirement in many fields of research, including those based on machine learning techniques, yet most machine learning research publications are either not reproducible or are too difficult to reproduce. With the growing number of research publications, including tens of thousands of papers hosted on arXiv and submissions to conferences, research reproducibility has become even more important. Though many publications are accompanied by useful code and trained models, users are still left to figure out many of the steps themselves.

PyTorch Hub consists of a pre-trained model repository designed to facilitate research reproducibility and enable new research. It provides built-in support for Colab, integrates with Papers With Code, and contains a set of models for classification and segmentation, transformers, generative models, and more. It supports publishing pre-trained models to a GitHub repository through the addition of a simple hubconf.py file, which declares the list of supported models and the dependencies required to run them; see, for example, the torchvision, huggingface-bert, and gan-model-zoo repositories.

Consider the case of torchvision's hubconf.py: in the torchvision repository, each of the model files can be executed independently, requires no package other than PyTorch, and doesn't need a separate entry point. A hubconf.py lets users send a pull request based on the template mentioned on the GitHub page. The official blog post reads, "Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore."

PyTorch Hub allows users to explore available models, load a model, and understand the kinds of methods available for any given model. Below are a few examples:

Explore available entrypoints: With the torch.hub.list() API, users can list all available entrypoints in a repo. PyTorch Hub also allows auxiliary entrypoints apart from pre-trained models, such as bertTokenizer for preprocessing in the BERT models, which makes the user workflow smoother.

Load a model: With the torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.
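A minimal sketch of that workflow, using the torchvision repository mentioned above (resnet18 is a standard torchvision entrypoint; any name reported by torch.hub.list() would do):

```python
import torch

# List the entrypoints the pytorch/vision repo publishes via hubconf.py.
entrypoints = torch.hub.list('pytorch/vision')
print(entrypoints)

# Load one of them with pre-trained weights and switch to inference mode.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()

# Run a dummy batch (1 image, 3 channels, 224x224) through the network.
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```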
Most users are happy about this news as they think it will be useful to them. A user commented on Hacker News, "I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary." Another user commented, "This will also make things easier for people writing algorithms on top of one of the base models." To know more about this news, check out PyTorch's blog post.

Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

Bhagyashree R
26 Jun 2019
3 min read
On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post outlining the various improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, continued improvements around Flatpak, Fleet Commander, and more. Here are some of the enhancements coming to Fedora Workstation 31:

Wayland transition nearing completion

Wayland is a display server protocol that was introduced to replace the X Window System with a modern and simpler windowing system in Linux and other Unix-like operating systems. The team is focusing on removing the X Window System dependency so that GNOME Shell will be able to run without the need for XWayland. Schaller shared that the work related to removing the X dependency is done for the shell itself; however, some things are left in regard to the GNOME Settings daemon. Once this work is complete, an X server (XWayland) will only start if an X application is run and will shut down when the application is stopped.

Another aspect the team is working on is allowing X applications to run as root under XWayland. Running desktop applications as root is generally not considered safe; however, a few applications only work when run as root, which is why the team has decided to continue supporting this in XWayland. The team is also adding support for the NVidia binary driver to allow running a native Wayland session on top of it.

PipeWire with an improved desktop sharing portal

PipeWire is a multimedia framework that aims to improve the handling of audio and video in Linux. This release will come with improvements to PipeWire's core features. The existing desktop sharing portal has been enhanced and will soon have Miracast support. The team's ultimate goal is to make the GNOME integration even more seamless than a standalone app could be.

Better infrastructure for building Flatpaks

Flatpak is a utility for software deployment and package management on Linux. The team is improving the infrastructure for building Flatpaks from RPMs and will be offering applications from flathub.io and quay.io out of the box, in accordance with Fedora rules for third-party software. The team will also be making a Red Hat UBI-based runtime available, which third-party developers can use to build their applications with the assurance that the runtime will be supported by Red Hat for the lifetime of a given RHEL release.

Fedora Toolbox with an improved GNOME Terminal

Fedora Toolbox is a tool that gives developers a seamless experience when using an immutable OS like Silverblue. Improvements are currently being made to GNOME Terminal to ensure more natural behavior when interacting with pet containers. The team is looking for ways to make the selection of containers more discoverable, so that developers can easily get access to, for instance, a Red Hat UBI container or a Red Hat TensorFlow container.

Along with these, the team is improving the infrastructure for Linux fingerprint reader support, securing GameMode, adding support for the Dell Totem, improving media codec support, and more. To know more, check out Schaller's blog post.

Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support


Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Vincy Davis
22 Aug 2019
2 min read
Yesterday, the Qt team introduced a new graphics toolkit called Qt for MCUs for creating fluid user interfaces (UIs) on cost-effective microcontrollers (MCUs). The toolkit will enable new and existing users to take advantage of the existing Qt tools and libraries used for device creation, enabling companies to provide a better user experience.

Petteri Holländer, Senior Vice President of Product Management at Qt, said, "With the introduction of Qt for MCUs, customers can now use Qt for almost any software project they're working on, regardless of target – with the added convenience of using just one technology framework and toolset." He further added, "This means that both existing and new Qt customers can pursue the many business growth opportunities offered by connected devices – across a wide and diverse range of industries."

Qt for MCUs utilizes the Qt Modeling Language (QML) and Qt's design and developer tools for constructing fast, customized Qt applications. "With the frontend defined in declarative QML and the business logic implemented in C/C++, the end result is a fluid graphical UI application running on microcontrollers," says the Qt team.

Key benefits offered by Qt for MCUs:

Existing skill sets can be reused for Qt on microcontrollers
The same technology can be used in high-end and mass-market devices, yielding lower maintenance costs
No compromise on graphics performance, hence reduced hardware costs
Users can upgrade to the cross-platform graphical toolkit from a legacy solution

Check out the Qt for MCUs website for more information.

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more