
Tech News


6 powerful microbots developed by researchers around the world

Prasad Ramesh
01 Sep 2018
4 min read
When we hear the word robot, we may think of large, industrial-sized robots assembling cars, or of humanoids. But there are also robots so tiny that you may not be able to see them with the naked eye. This article covers six such microbots, all in early stages of development.

Harvard's Ambulatory Microrobot (HAMR): A robotic cockroach

Source: Harvard

HAMR is a versatile, 1.8-inch-long robotic platform that resembles a cockroach. HAMR itself weighs in under an ounce and can run, jump, and carry small items about twice its own weight. It is fast, moving at almost 19 inches per second. HAMR has given researchers a useful base platform on which to build other ideas. For example, the HAMR-F, an enhanced version of HAMR, has no restraining wires: it can move around independently, and is only slightly heavier (2.8 g) and slower than the HAMR. It is powered by a micro 8 mAh lithium polymer battery. Scientists at Harvard's School of Engineering and Applied Sciences also recently added footpads that allow the microbot to swim on the water's surface, sink, and walk underwater.

Robotic bees: RoboBees

Source: Harvard

Like the HAMR, Harvard's RoboBee has improved over time; it can both fly and swim. Its first successful flight was in 2013, and in 2015 it was able to swim. More recently, in 2016, it gained the ability to "perch" on surfaces using static electricity. This allows the RoboBee to save power for longer flights. The 80-milligram robot can take a swim, leap up from the water, and then land. The RoboBee can flap its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.

μRobotex: microbots from France

Source: ScienceAlert

Scientists from the Femto-ST Institute in France have built the μRobotex platform, a new, extremely small microrobot system. This system has built the smallest house in the world inside a vacuum chamber; the robot used an ion beam to cut a silica membrane into tiny pieces for assembly.
The micro house is 0.015 mm high and 0.020 mm broad. For comparison, a grain of sand is anywhere from 0.05 mm to 2 mm in diameter. The completed house sits on the tip of a piece of optical fiber, as shown in the image above.

Salto: a one-legged jumper

Source: Wired

Salto (saltatorial locomotion on terrain obstacles), developed at the University of California, Berkeley, is a one-legged jumping robot that stands 10.2 inches tall when fully extended. It weighs about 100 grams and can jump up to 1 meter into the air. Salto's skills go beyond a single jump: it can bounce off walls and perform several jumps in a row while avoiding obstacles. Salto was inspired by the galago, a small mammal that is an expert jumper. The idea behind Salto is robots that can leap over rubble to provide emergency services. The newest model is the Salto-1P.

Rolls-Royce's SWARM robots

Source: Rolls-Royce

Rolls-Royce has teamed up with researchers from the University of Nottingham and Harvard University to develop independent, tiny mobile robots called SWARM. About 0.4 inches in diameter, they are part of Rolls-Royce's IntelligentEngine program. The SWARM robots are put into position by a robotic snake and use tiny cameras to capture parts of an engine that are otherwise hard to access. This gives mechanics a much better view of what is wrong with an engine. The future plan is for SWARM to inspect aircraft engines without removing them from the airplane.

Short-Range Independent Microrobotic Platforms (SHRIMP)

Source: DARPA

The Defense Advanced Research Projects Agency (DARPA) wants to develop insect-scale robots with "untethered mobility, maneuverability, and dexterity." In other words, it wants microbots that can move around independently. DARPA plans to sponsor these robots through the SHRIMP program for search and rescue, disaster relief, and hazardous environment inspection.
It is also looking for robots that might work as prosthetics, or as eyes to see into places that are hard to reach. These microbots are in early development stages, but once they enter production they will be highly versatile. From medical assistance to guided inspection of small spaces, these microbots promise to be useful in a variety of areas.

Read next:
- Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]
- 15 million jobs in Britain at stake with AI robots set to replace humans at workforce
- What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]


ServiceNow Partners with IBM on AIOps from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
ServiceNow and IBM this week announced that the Watson artificial intelligence for IT operations (AIOps) platform from IBM will be integrated with the IT service management (ITSM) platform from ServiceNow. Pablo Stern, senior vice president for IT workflow products for ServiceNow, said once that capability becomes available later this year on the Now platform, IT […] The post ServiceNow Partners with IBM on AIOps appeared first on DevOps.com.


Looking back: A year (or two) in review of Tableau web authoring from What's New

Anonymous
29 Dec 2020
6 min read
Kevin Mason, Senior Product Manager, Tableau | December 29, 2020

We are wrapping up 2020 with some well-timed holiday goodies. Tableau 2020.4 marks a very special milestone on our web authoring journey with the completion of the most requested web features from the last two years, as well as the exciting release of Tableau Prep Builder on the web! Our dev teams have been hard at work building Tableau into a true SaaS solution. With the majority of people working from home, web authoring has been particularly important for quickly giving analysts access to the right data from anywhere, without requiring a top-of-the-line computer to run analyses: a simple browser and a reliable internet connection will do.

With the year coming to an end, we thought it would be fun to reflect on and celebrate how far we have come on this journey to the web.

Humble beginnings

During the early 2010s, the benefits of SaaS began to bear fruit. Tableau Desktop was our bread and butter, but it required IT to install the software directly onto folks' computers and to maintain each individual license. This is where Tableau began to invest in Tableau Server, Tableau Online, and web authoring, though it was pretty limited in the early days. Can you believe web authoring didn't even have dashboards until 2016?! Oh, how time flies.

Much to our excitement, customers like Oldcastle saw the potential web authoring could bring to their organization. At TC15, Oldcastle shared how it was encouraging employees to ask more data-driven questions and dig deeper using web authoring. As a pioneer in effectively using web authoring (even before dashboard editing!), Oldcastle's TC talk is still relevant today.

As part of Tableau Server and Tableau Online, web authoring offers a lot of benefits. It can be centrally managed, which simplifies deployment, license management, and version control.
This means:
- Everyone in the organization gets the latest version during a Server or Online update; no individual Desktop updates are needed.
- Since all workbooks are stored on the Server, IT professionals have more visibility into what people are creating, which helps with data governance and resource management.
- IT teams don't have to worry about managing multiple individual licenses; with web authoring, they can maintain licenses, upgrades, and content all on Tableau Server or Tableau Online.
- Analysts don't have to context-switch back to Desktop to make small changes. It can all be done in the same, single place.

An end-to-end experience in the browser

Since then, we have been hard at work bringing much-loved Desktop features into the browser; we're talking full home remodeling, down to the studs (basically Extreme Makeover: Tableau Edition). Our 2018.1 release saw the biggest change, with the ability to connect to data from the web, plus our new role-based pricing model. Parameters (2019.1), tooltips (2020.1), and filters (2020.3) soon followed. Finally, Tableau 2020.4 was extra special, bringing the last of the most requested features you have patiently been waiting for to the web: actions, sets, and extracts. We heard the cries, demands, and pleas over the last three years, and I'm thrilled to say that web authoring has achieved parity with the Tableau Desktop you know and love! 2020.4 also includes Apply Filters to Selected Worksheets!

During this journey, early adopters continued to share their success stories. At TC18, DISH Network illustrated how a few teams rolled out web authoring broadly in the organization and set up specific training sessions for new users. By setting up web authoring for analysts across the organization, DISH dramatically reduced the number of ad hoc requests its primary analytics teams receive.
As a result, the primary teams can focus on larger, org-wide projects while everyday analysts self-serve their own ad hoc requests for query and visualization changes. DISH still serves as an excellent example of how to create a data-driven culture.

Try it out yourself this new year

Oldcastle and DISH are just two examples of the many customers finding success with web authoring. Even our sales team uses web authoring to build dashboards for the majority of their demos! Over the last 18 months, more and more customers have been asking how to use web authoring to help expand the use of data throughout their business. If you are curious to learn more, including some best practices, check out my Tableau Community post, where I collected resources with real customer examples and numerous videos from Tableau Conferences. Or jump right in! Create a new workbook from scratch right on the web by clicking "New" > "Workbook" on the Explore page. If you have the right permissions, you can edit existing workbooks by clicking the "Edit" button on the toolbar.

We've certainly come a long way, together

As we close 2020, we would like to thank you. We really appreciate your patience as we rebuilt much-loved Desktop features, and we cannot thank you enough for helping us identify which ones were most important to you. Thank you to the 20,000+ of you who have participated in beta programs, posted on the Community Forums, and shared candid feedback while our PMs and user researchers pestered you with questions. It's a little corny, but it's true: you are what makes this #DataFam as special as it is. With your help, we were able to prioritize these web features alongside new analytics capabilities like viz in tooltip, nested sorting, spatial joins, and set actions!

We are excited to see what you build on the web using Tableau 2020.4. And I'm even more excited to show you what's coming in 2021. In the coming releases, you will see more web-first features.
After all, web applications are, well, web applications—so we expect them to behave a little differently, and certainly faster, than ye ol’ Desktop. I can’t share exact details, but you can expect investments that will make Tableau an exceptional web experience. And don’t worry—we are still delivering the very few remaining Desktop-loved features to the browser. We are just adding some special web-first considerations to them! Happy Holidays from all of us, to you. Here’s to 2021!  


Python 3.7 beta is available as the second generation Google App Engine standard runtime

Sugandha Lahoti
09 Aug 2018
2 min read
Google has announced the availability of Python 3.7 in beta on the App Engine standard environment. Developers can now easily run their web apps using up-to-date versions of popular languages, frameworks, and libraries, with Python being one of them. The Second Generation runtimes remove previous App Engine restrictions, giving developers the ability to write portable web apps and microservices while taking full advantage of App Engine features such as auto-scaling, built-in security, and a pay-per-use billing model.

Python 3.7 was introduced as one of the new Second Generation runtimes at Cloud Next. The Python 3.7 runtime brings developers up to date with the language community's progress. As a Second Generation runtime, it enables a faster path to continued runtime updates. It also supports arbitrary third-party libraries, including those that rely on C code and native extensions. The new Python 3.7 runtime also supports the Google Cloud client libraries, so developers can integrate GCP services into their app and run it on App Engine, Compute Engine, or any other platform.

LumApps, a Paris-based provider of enterprise intranet software, has chosen App Engine to optimize for scale and developer productivity. Elie Mélois, CTO and co-founder of LumApps, says: "With the new Python 3.7 runtime on App Engine standard, we were able to deploy our apps very quickly, using libraries that we wanted such as scikit. App Engine helped us scale our platform from zero to over 2.5M users, from three developers to 40—all this with only one DevOps person!" Check out the documentation to start using Python 3.7 today on the App Engine standard environment.

Read next:
- Deploying Node.js apps on Google App Engine is now easy
- Hosting on Google App Engine
- Should you move to Python 3? 7 Python experts' opinions
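To give a sense of how little code a Python 3.7 standard app needs, here is a minimal sketch of a main.py (the greeting text is made up; the convention of a WSGI-compatible callable named app in main.py, paired with an app.yaml containing just runtime: python37, follows the Python 3.7 standard environment as I understand it):

```python
# main.py: a minimal WSGI app for the App Engine Python 3.7 standard runtime.
# With an app.yaml containing only `runtime: python37`, App Engine serves any
# WSGI-compatible callable named `app` from this module.
def app(environ, start_response):
    """Return a plain-text greeting for every request."""
    body = b"Hello from Python 3.7 on App Engine!"
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

From here, `gcloud app deploy` is all it takes; frameworks such as Flask work the same way, since a Flask application object is itself a WSGI callable.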


Debian 10.2 Buster Linux distribution releases with the latest security and bug fixes

Bhagyashree R
18 Nov 2019
3 min read
Last week, the Debian team released Debian 10.2, the latest point release in the "Buster" series. This release includes a number of bug fixes and security updates. In addition, starting with this release, Firefox ESR (Extended Support Release) is no longer supported on the ARMEL variant of Debian.

Key updates in Debian 10.2

Security updates

Some of the security fixes added in Debian 10.2 are:
- Apache2: Six vulnerabilities reported in the Apache HTTPD server are fixed: CVE-2019-9517, CVE-2019-10081, CVE-2019-10082, CVE-2019-10092, CVE-2019-10097, CVE-2019-10098.
- Nghttp2: Two vulnerabilities, CVE-2019-9511 and CVE-2019-9513, found in the HTTP/2 code of the nghttp2 HTTP server are fixed.
- PHP 7.3: Five security issues that could result in information disclosure or denial of service are fixed: CVE-2019-11036, CVE-2019-11039, CVE-2019-11040, CVE-2019-11041, CVE-2019-11042.
- Linux: Five security issues in the Linux kernel that could otherwise have led to privilege escalation, denial of service, or information leaks are fixed: CVE-2019-14821, CVE-2019-14835, CVE-2019-15117, CVE-2019-15118, CVE-2019-15902.
- Thunderbird: Security issues that could potentially have resulted in the execution of arbitrary code, cross-site scripting, and information disclosure are fixed. These are tracked as CVE-2019-11739, CVE-2019-11740, CVE-2019-11742, CVE-2019-11743, CVE-2019-11744, CVE-2019-11746, CVE-2019-11752.

Bug fixes

Debian 10.2 brings several bug fixes for popular packages, including:
- Emacs: The European Patent Litigation Agreement (EPLA) key is now updated.
- Flatpak: Debian 10.2 includes the new upstream stable release of Flatpak, a tool for building and distributing desktop applications on Linux.
- GNOME Shell: In addition to including the new upstream stable release of GNOME Shell, this release fixes truncation of long messages in Shell modal dialogs and avoids a crash on reallocation of dead actors.
- LibreOffice: The PostgreSQL driver is fixed to work with PostgreSQL 12.
- Systemd: Starting with Debian 10.2, reload failures no longer get propagated to service results. 'sync_file_range' failures in nspawn containers on ARM and PPC systems are also fixed.
- uBlock: The uBlock adblocker is updated to its new upstream version and is compatible with Firefox ESR68.

These were some of the updates in Debian 10.2. Check out the official announcement by the Debian team to see what else shipped in this release.

Read next:
- Severity issues raised for Python 2 Debian packages for not supporting Python 3
- Debian 10 codenamed 'buster' released, along with Debian GNU/Hurd 2019 as a port
- Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap


Electron 5.0 ships with new versions of Chromium, V8, and Node.js

Sugandha Lahoti
25 Apr 2019
2 min read
After publicly sharing the release timeline for Electron 5.0 and beyond in February, the team released Electron 5.0 on Tuesday, as planned, with new features, upgrades, and fixes. Electron 5.0 ships with the latest version upgrades of its core components: Chromium 73.0.3683.119, Node.js 12.0.0, and V8 7.3.492.27. It also includes improvements to Electron-specific APIs. With this release, Electron 2.0.x has reached end of life.

Major changes in Electron 5.0
- Packaged apps now behave the same as the default app: a default application menu is created (unless the app has one) and the window-all-closed event is handled automatically (unless the app handles it).
- Mixed sandbox mode is now enabled by default. Renderers launched with sandbox: true are now actually sandboxed, where previously they would only be sandboxed if mixed-sandbox mode was also enabled.
- The default values of nodeIntegration and webviewTag are now false to improve security.
- The SpellCheck API has been changed to provide asynchronous results.

New features
- BrowserWindow now supports managing multiple BrowserViews within the same BrowserWindow.
- Electron 5 continues Electron's Promisification initiative, which converts callback-based functions in Electron to return Promises. During the transition period, both the callback- and Promise-based versions of these functions work correctly, and both are documented. A total of 12 APIs were converted for Electron 5.0.
- Three functions were changed or added to systemPreferences to access macOS system colors: systemPreferences.getAccentColor, systemPreferences.getColor, and systemPreferences.getSystemColor.
- The function process.getProcessMemoryInfo has been added to get memory usage statistics about the current process.
- New remote events have been added to improve security in the remote API.
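Promisification is a general pattern, not something specific to Electron: a callback-taking function is wrapped so it returns an awaitable instead. Here is a hedged sketch of that pattern in Python with asyncio (dialog_show_callback is a made-up stand-in, not a real Electron API; Electron's own work does this in JavaScript with Promises):

```python
import asyncio

def dialog_show_callback(options, callback):
    """Old style: deliver the result through a callback (stand-in function)."""
    callback({"clicked": options.get("default", 0)})

def dialog_show_async(options):
    """New style: wrap the callback API so it returns an awaitable future."""
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    # The callback simply resolves the future with whatever it is given.
    dialog_show_callback(options, future.set_result)
    return future

async def main():
    # During a transition period both styles can coexist; callers of the
    # wrapped version just await it.
    return await dialog_show_async({"default": 1})

print(asyncio.run(main()))  # {'clicked': 1}
```

The same wrapping keeps old callers working while new code gets the flatter await-based control flow, which is the point of Electron's initiative.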
Now, remote.getBuiltin, remote.getCurrentWindow, remote.getCurrentWebContents, and <webview>.getWebContents can be filtered.

Deprecated APIs

Three APIs are newly deprecated in Electron 5.0.0 and planned for removal in 6.0.0: mksnapshot binaries for arm and arm64, ServiceWorker APIs on WebContents, and automatic modules with sandboxed webContents.

These are just a select few updates; for the specifics, see the release notes. Also check out the tentative 6.0.0 schedule for key dates in the Electron 6 development life cycle. Users can install Electron 5.0 with npm via npm install electron@latest or download the tarballs from the Electron releases page.

Read next:
- The Electron team publicly shares the release timeline for Electron 5.0
- Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
- How to create a desktop application with Electron [Tutorial]

IETF proposes JSON Meta Application Protocol (JMAP) as the next standard for email protocols

Bhagyashree R
22 Jul 2019
4 min read
Last week, the Internet Engineering Task Force (IETF) published the JSON Meta Application Protocol (JMAP) as RFC 8620, now marked as a "Proposed Standard". The protocol is authored by Neil Jenkins, Director and UX Architect at Fastmail, and Chris Newman, Principal Engineer at Oracle. https://twitter.com/Fastmail/status/1152281229083009025

What is the JSON Meta Application Protocol (JMAP)?

Fastmail started working on JMAP in 2014 as an internal development project. It is an internet protocol that handles the submission and synchronization of emails, contacts, and calendars between a client and a server, providing a consistent interface to different data types. It is developed to be a possible successor to IMAP and a potential replacement for the CardDAV and CalDAV standards.

Why is it needed?

According to the developers, the current standards for client-server email communication, IMAP and SMTP, are outdated and complicated. They are not well suited to modern mobile networks and high-latency scenarios. These limitations have led to stagnation in the development of good new email clients, and many vendors have come up with proprietary alternatives such as Gmail, Outlook, Nylas, and Context.io. Another drawback is that many mobile email clients proxy everything via their own server instead of talking directly to the user's mail store, for example Outlook and Newton. This is not only bad for client authors, who have to run server infrastructure in addition to building their clients, but it also raises security and privacy concerns. Here's a video by Fastmail explaining the purpose behind JMAP: https://www.youtube.com/watch?v=8qCSK-aGSBA

How does JMAP solve the limitations of the current standards?

JMAP is designed to be easier for developers to work with and to enable efficient use of network resources.
Here are some of its properties that address the limitations in the current standards:
- Stateless: It does not require a persistent connection, which fits mobile environments best.
- Immutable ids: It is more like NFS or filesystems with inodes than a name-based hierarchy, which makes renaming easy to detect and cheap to sync.
- Batchable API calls: It batches multiple API calls in a single request to the server, resulting in fewer round trips and better battery life for mobile users.
- Flood control: The client can put limits on how much data the server is allowed to send. For instance, a command will return a 'tooManyChanges' error on exceeding the client's limit, rather than returning a million '* 1 EXPUNGED' lines as can happen in IMAP.
- No custom parser required: Support for JSON, a well-understood and widely supported encoding format, makes it easier for developers to get started.
- A backward-compatible data model: Its data model is backward compatible with both IMAP folders and Gmail-style labels.

Fastmail is already using JMAP in production for its Fastmail and Topicbox products. It is also seeing adoption in organizations like the Apache Software Foundation, which added experimental support for JMAP to its free mail server Apache James in version 3.0.

Many developers are happy about this announcement. A user on Hacker News said, "JMAP client and the protocol impresses a lot. Just 1 to a few calls, you can re-sync entire emails state in all folders. With IMAP need to select each folder to inspect its state. Moreover, just a few IMAP servers support fast synchronization extensions like QRESYNC or CONDSTORE." However, its use of JSON did spark some debate on Hacker News. "JSON is an incredibly inefficient format for shareable data: it is annoying to write, unsafe to parse and it even comes with a lot of overhead (colons, quotes, brackets and the like). I'd prefer s-expressions," a user commented.
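The batching and back-reference ideas are easiest to see in an example. Below is a hedged sketch, in Python, of a single JMAP request body that lists mailboxes, queries the newest ten emails, and fetches their subjects, all in one round trip (the capability URNs and method names follow the JMAP RFCs as I understand them; the accountId value is made up):

```python
import json

# One JMAP request can batch several method calls. Later calls can consume
# earlier results via back-references (arguments whose names start with "#").
request = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        # Call "0": list all mailboxes for the account.
        ["Mailbox/get", {"accountId": "a1", "ids": None}, "0"],
        # Call "1": query the ids of the newest 10 emails.
        ["Email/query", {"accountId": "a1", "limit": 10}, "1"],
        # Call "2": fetch those emails, back-referencing call "1"'s result,
        # so the client never has to see the id list itself.
        ["Email/get", {
            "accountId": "a1",
            "#ids": {"resultOf": "1", "name": "Email/query", "path": "/ids"},
            "properties": ["subject", "receivedAt"],
        }, "2"],
    ],
}

body = json.dumps(request)  # POST this JSON to the server's apiUrl
```

Compare this with IMAP, where the same work takes a stateful session and a SELECT per folder; here the whole exchange is one stateless HTTP POST.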
To stay updated on developments in JMAP, you can join its mailing list. To read more about the specification, check out the official website and the GitHub repository.

Read next:
- Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
- Google announces the general availability of AMP for email, faces serious backlash from users
- Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!


Kubernetes 1.11 is here!

Vijin Boricha
28 Jun 2018
3 min read
This is the second release of Kubernetes in 2018. Kubernetes 1.11 comes with significant updates on features that revolve around the maturity, scalability, and flexibility of Kubernetes. This newest version comes with storage and networking enhancements that make it possible to plug any kind of infrastructure, cloud or on-premise, into the Kubernetes system. Now let's dive into the key aspects of this release:

IPVS-Based In-Cluster Service Load Balancing Promotes to General Availability

IPVS has a simpler programming interface than iptables and delivers high-performance in-kernel load balancing. In this release it has moved to general availability, where it provides better network throughput, lower programming latency, and higher scalability limits. It is not yet the default option, but clusters can use it for production traffic.

CoreDNS Graduates to General Availability

CoreDNS has moved to general availability and is now the default option when using kubeadm. It is a flexible DNS server that directly integrates with the Kubernetes API. Compared to the previous DNS server, CoreDNS has fewer moving parts, since it is a single process that creates custom DNS entries to support flexible use cases. CoreDNS is also memory-safe, as it is written in Go.

Dynamic Kubelet Configuration Moves to Beta

It has always been difficult to update kubelet configurations in a running cluster, as kubelets are configured through command-line flags. With this feature moving to beta, one can configure kubelets in a live cluster through the API server.

CSI enhancements

Over the past few releases, CSI (Container Storage Interface) has been a major focus area. This service was moved to beta in version 1.10.
In this version, the Kubernetes team continues to enhance CSI with a number of new features, such as:
- Alpha support for raw block volumes in CSI
- Integration of CSI with the new kubelet plugin registration mechanism
- An easier way to pass secrets to CSI plugins

Enhanced Storage Features

This release introduces online resizing of Persistent Volumes as an alpha feature. With this feature, users can increase a PV's size without terminating pods or unmounting the volume. The user updates the PVC to request a new size, and the kubelet resizes the file system for the PVC.

Dynamic maximum volume count is introduced as an alpha feature. With this new feature, in-tree volume plugins can specify the number of volumes that can be attached to a node, allowing the limit to vary based on the node type. In the earlier version, the limits were configured through an environment variable.

The StorageObjectInUseProtection feature is now stable and prevents issues caused by deleting a Persistent Volume or a Persistent Volume Claim that is still bound to an active pod.

You can learn more about Kubernetes 1.11 on the Kubernetes blog, and this version is available for download on GitHub. To get started with Kubernetes, check out our following books:
- Learning Kubernetes [Video]
- Kubernetes Cookbook - Second Edition
- Mastering Kubernetes - Second Edition

Related links:
- VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
- Rackspace now supports Kubernetes-as-a-Service
- Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
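The online PV resize flow can be sketched as a config fragment (field names follow the PersistentVolumeClaim API; the claim name and storage class are made up, and a storage class with allowVolumeExpansion: true plus the alpha ExpandPersistentVolumes feature gate are assumed):

```yaml
# pvc.yaml: to grow the volume, edit spec.resources.requests.storage on the
# existing claim and re-apply; no pod restart or unmount is needed
# (alpha in Kubernetes 1.11).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # must set allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi           # was 10Gi; the larger request triggers the resize
```

Re-applying the edited claim with kubectl apply asks the volume plugin to expand the backing volume, after which the file system is grown in place.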


Vue CLI 3.0 is here as the standard build toolchain behind Vue applications

Sugandha Lahoti
13 Aug 2018
3 min read
The team behind Vue has announced Vue CLI 3.0 as the standard build tool behind Vue applications. Vue CLI 3.0 minimizes the amount of configuration developers have to go through. At its core, it provides a pre-configured build setup on top of webpack 4, with features such as hot module replacement, code splitting, tree shaking, and efficient long-term caching. Vue CLI 3.0 also comes with a Modern mode, where developers can ship a native ES2017+ bundle and a legacy bundle in parallel, and a Multi-page mode to build an app with multiple HTML/JS entry points. Developers can also build Vue single-file components into a library or into native web components, and they are offered optional integrations (TypeScript, PWA, Vue Router & Vuex, ESLint / TSLint / Prettier, unit testing via Jest or Mocha, E2E testing via Cypress or Nightwatch) when creating a new project.

Vue CLI 3.0 comes with Zero configuration

In most cases, developers just need to focus on writing the code. On scaffolding a project via Vue CLI 3.0, all the redundant work, such as installing the Vue CLI runtime service, the selected feature plugins, and the necessary config files, is done automatically. Vue CLI also ships with the vue inspect command to help developers inspect the internal webpack configuration, with no ejection required to make small tweaks.

A powerful Plugin system

Vue CLI 3.0 has an extensible plugin system which can inject dependencies and files during the app's scaffolding phase, tweak the app's webpack config, or inject additional commands into the CLI service during development. Developers can also create their own remote preset to share their selection of plugins and options with other developers.

Instant Prototyping

Vue CLI 3's vue serve command allows developers to start prototyping with Vue single-file components without waiting for npm install. The prototyping dev server comes with the same setup as a standard app.
This allows developers to easily move the prototype *.vue file into a properly scaffolded project's src folder to continue working on it.

Modern Mode

Vue CLI 3.0 has a modern mode which produces two versions of an app: first, a modern bundle targeting modern browsers that support ES modules, and second, a legacy bundle targeting older browsers that do not. The modern bundle is loaded with <script type="module"> in browsers that support it; the legacy bundle is loaded with <script nomodule>, which is ignored by browsers that support ES modules. Modern mode can be activated using the following command:

vue-cli-service build --modern

This release focuses on making Vue CLI the standard build toolchain for Vue applications. However, the longer-term goal for Vue CLI is to incorporate best practices from both the present and the future into the toolchain. Vue CLI 3.0 can be tried by following the instructions in the docs. The list of all updates is available on the Vue Medium blog.

Read next:
- Introducing Vue Native for building native mobile apps with Vue.js
- Why has Vue.js become so popular?
- How to navigate files in a Vue app using the Dropbox API
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Melisha Dsouza
11 Sep 2018
4 min read
Microsoft is rebranding Visual Studio Team Services (VSTS) to Azure DevOps, along with Azure DevOps Server, the successor of Team Foundation Server (TFS). Microsoft understands that DevOps has become increasingly critical to a team's success, and the rebranding is aimed at shipping higher-quality software in a shorter span of time. Azure DevOps supports both public and private cloud configurations. The services are open and extensible and designed to work with any type of application, framework, platform, or cloud. Since the Azure DevOps services work well together, users can gain more control over their projects.

Azure DevOps is free for open source projects and small projects with up to five users. For larger teams, the cost ranges from $30 per month to $6,150 per month, depending on the number of users. VSTS users will be upgraded to Azure DevOps projects automatically without any loss of functionality. URLs will change from abc.visualstudio.com to dev.azure.com/abc, and redirects from visualstudio.com URLs will be supported to avoid broken links. New users will get the update starting 10th September 2018, and existing users can expect the update in the coming months.

Key features in Azure DevOps:

#1 Azure Boards

Users can keep track of their work at every development stage with Kanban boards, backlogs, team dashboards, and custom reporting. Built-in scrum boards and planning tools help in planning meetings, while powerful analytics tools give new insights into the health and status of projects.

#2 Azure Artifacts

Users can easily manage Maven, npm, and NuGet package feeds from public and private sources. Storing and sharing code across small teams and large enterprises is now efficient thanks to Azure Artifacts. Users can share packages and use built-in CI/CD, versioning, and testing, and can easily access all their artifacts in builds and releases.

#3 Azure Repos

Users get unlimited cloud-hosted private Git repos for their projects.
They can securely connect with and push code into their Git repos from any IDE, editor, or Git client. Code-aware search helps them find what they are looking for. They can perform effective Git code reviews and use forks to promote collaboration with inner source workflows. Azure Repos helps users maintain high code quality by requiring code reviewer sign-off, successful builds, and passing tests before pull requests can be merged.

#4 Azure Test Plans

Users can improve their code quality using planned and exploratory testing services for their apps. Test plans help users capture rich scenario data, test their applications, and take advantage of end-to-end traceability.

#5 Azure Pipelines

There's more in store for VSTS users: for a seamless developer experience, Azure Pipelines is now also available in the GitHub Marketplace. Users can easily configure a CI/CD pipeline for any Azure application using their preferred language and framework. Pipelines can be built and deployed with ease, and provide users with status reports, annotated code, and detailed information on changes to the repo within the GitHub interface. Pipelines work with any platform, such as Azure, Amazon Web Services, and Google Cloud Platform, and can build apps for any operating system, including Android, iOS, Linux, macOS, and Windows. Pipelines are free for open source projects.

Microsoft has tried to improve the user experience by introducing these upgrades. Are you excited yet? You can learn more at Microsoft's live Azure DevOps keynote today at 8:00 a.m. Pacific and a workshop with Q&A on September 17 at 8:30 a.m. Pacific on Microsoft's events page. You can read all the details of the announcement on Microsoft's official blog.
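An Azure Pipelines CI/CD pipeline like the one described above is defined in an azure-pipelines.yml file at the root of the repository. The following is a minimal illustrative sketch only; the Node.js build steps and agent image are assumptions for the example, not taken from the announcement:

```yaml
# Hypothetical minimal azure-pipelines.yml for a Node.js app.
trigger:
- master                    # run the pipeline on pushes to master

pool:
  vmImage: 'ubuntu-16.04'   # Microsoft-hosted Linux build agent

steps:
- script: |
    npm install
    npm test
  displayName: 'Install dependencies and run tests'
```

Once this file is committed and the repo is connected via the GitHub Marketplace integration, each push triggers a build whose status is reported back on the GitHub pull request.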
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
8 ways Artificial Intelligence can improve DevOps
ROS Melodic Morenia released

Gebin George
28 May 2018
2 min read
ROS (Robot Operating System) is a middleware with a set of tools and software frameworks for building and simulating robots. ROS follows a stable release cycle, with a new version arriving every year on the 23rd of May. ROS released its Melodic Morenia version on schedule this year, with a decent number of enhancements and upgrades. Following are the release notes:

class_loader header deprecation

class_loader's headers have been renamed and the previous ones have been deprecated, in an effort to bring them closer to multi-platform support and to the ROS 2 counterpart. You can refer to the migration script provided for the header replacements, and PRs will be released for all the packages in the previous ROS distribution.

kdl_parser package enhancement

kdl_parser has deprecated a method that was linked with tinyxml (which was itself already deprecated). The tinyxml2 replacement API is as follows:

bool treeFromXml(const tinyxml2::XMLDocument * xml_doc, KDL::Tree & tree)

The deprecated API will be removed in N-turtle.

OpenCV version update

For standardization reasons, the supported OpenCV version is restricted to 3.2.

Enhancements in pluginlib

As with class_loader, the headers were deprecated here as well, to bring them closer to multi-platform support. plugin_tool, which had been deprecated for years, has finally been removed in this version.

For more updates on the packages of ROS, refer to the ROS Wiki page.
GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

Vincy Davis
02 May 2019
2 min read
Yesterday, GitHub announced in a blog post that it was deprecating the Network Graph from the repository's Insights panel and that visits to this page would be redirected to the forks page instead. Following this announcement, GitHub removed the network graph. On the same day, however, it deleted the blog post and added the network graph back.

The network graph is one of the more useful features for developers on GitHub. It displays the branch history of the entire repository network, including branches of the root repository and branches of forks that contain commits unique to the network. Users of GitHub were alarmed on seeing the blog post about the removal of the network graph without any prior notification or a suitable replacement; for many users, this meant a significant burden of additional work.

https://twitter.com/misaelcalman/status/1123603429090373632
https://twitter.com/theterg/status/1123594154255187973
https://twitter.com/morphosis7/status/1123654028867588096
https://twitter.com/jomarnz/status/1123615123090935808

Following the backlash and requests to bring back the Network Graph, the Community Manager of GitHub posted on its community forum, on the same day, that they would be reverting the change based on users' feedback. Later, the blog post announcing the deprecation was removed and the network graph was back on the website. This brought a huge sigh of relief among GitHub's users, as the feature is popular for checking the state of a repository and the relationship between active branches.

https://twitter.com/dotemacs/status/1123851067849097217
https://twitter.com/AlpineLakes/status/1123765300862836737

GitHub has not yet officially commented on why it removed the network graph in the first place.
A Reddit user has put up an interesting shortlist of suspicions:

- The cost-benefit analysis from "The Top" determined that the compute time for generating the graph was too expensive, and so they "moved" the feature to a more premium account.
- "Moved" could also mean unceremoniously killing off the feature because some manager thought it wasn't shiny enough.
- Microsoft buying GitHub made (and will continue to make) GitHub worse, and this is just a harbinger of things to come.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Apache Software Foundation finally joins the GitHub open source community
Microsoft and GitHub employees come together to stand with the 996.ICU repository
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed 'ProcDump for Linux', a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3. ProcDump for Linux is a reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump

The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6, and Ubuntu 14.04 LTS, with other versions being tested. It also requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump

- Runs on Linux kernels version 3.5+
- Does not have full feature parity with the Windows version of ProcDump; specifically, it lacks the stay-alive functionality and custom performance counters

Installing ProcDump

ProcDump can be installed using two methods: via the package manager, which is the preferred method, or via a .deb package. To know more about ProcDump, visit its GitHub page.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
'We are not going to withdraw from the future' says Microsoft's Brad Smith on the ongoing JEDI bid, Amazon concurs
Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers
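As a rough sketch of the preferred package-manager route on Ubuntu, the flow below follows Microsoft's usual packages.microsoft.com repository setup; the exact repository URL and Ubuntu version are assumptions and may have changed since the announcement, so check the project's GitHub page for the current steps:

```shell
# Register the Microsoft package repository (Ubuntu 16.04 shown as an
# example; the config .deb differs per distribution/version).
wget https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

# Then install procdump from that repository.
sudo apt-get update
sudo apt-get install procdump
```

The alternative route is downloading the project's .deb package directly from its GitHub releases and installing it with dpkg.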
BitBucket goes down for over an hour

Natasha Mathur
25 Oct 2018
2 min read
Bitbucket, a web-based version control repository that allows users to manage and share their Git repositories as a team, suffered an outage today. As per Bitbucket's incident page, the outage started at 8 AM UTC today and lasted for over an hour, until 9:02 AM UTC, before the service finally returned to its normal state. The Bitbucket team tweeted regarding the outage:

https://twitter.com/BitbucketStatus/status/1055372361036312576

It was only earlier this week that GitHub went down for a complete day due to a failure in its data storage system. In GitHub's case, there was no obvious way to tell whether the site was down, as the website's backend Git services were working; however, users were not able to log in, outdated files were being served, branches went missing, and users were unable to submit gists, bug reports, posts, and more. Bitbucket, by contrast, was completely broken during the entirety of the outage: all the services, from Pipelines to actually getting at the code, were down, and the site showed an "Internal Server" error.

Bitbucket hasn't spoken out regarding the real cause of the outage. However, as per the Bitbucket status page, the site had been experiencing elevated error rates and degraded functionality for the past two days, which could be the possible reason. After the outage was over, Bitbucket tweeted about the recovery:

https://twitter.com/BitbucketStatus/status/1055384158392922112

As the services were down, developers and coders around the world took to Twitter to vent their frustration.

https://twitter.com/HeinrichCoetzee/status/1055370890127519744
https://twitter.com/montakurt/status/1055372412651495424
https://twitter.com/CapAmericanec/status/1055370560606294016

Developers rejoice!
Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code editor; GitHub security alerts now have machine intelligence
Qt for Python 5.11 released!

Pavan Ramchandani
18 Jun 2018
3 min read
The Qt team, in their blog, announced the official release of Qt with Python support. This is the first official release of the Qt framework with support for Python, tagged as Qt for Python 5.11. Previously, Python support for Qt developers was provided through the PySide module, and the work has now been carried forward in PySide 2 to provide Qt for Python. The Qt team has been working on the core Qt framework for quite some time to incorporate Python support, and this is the first breakthrough in that direction. The team has also noted that versions of Qt earlier than v5.11 will not support Python, while the following versions of Qt will continue supporting this project and make the Python support stable going ahead.

This is said to be a preview release, with a list of known issues for early adopters. The team is hoping to receive feedback from users so that it can make the binding smoother and rectify bugs. A lot of work has also gone into keeping the Qt syntax unchanged, allowing flexible migration from C++, the de facto language for developing UIs with Qt, to Python and the other way round.

The release blog mentions that the major roadblock in providing a Python binding for the C++-based Qt was the size of the packages. This led the team to work on using external tools for Qt scripting with Python, which had resulted in the development of PySide in 2009. To extend the support for Python, work has been done on the C++ headers in the Qt framework so that developers can write modules in Python. These efforts resulted in the latest PySide 2, which has very little overhead for using Python and Qt for GUI development. The Qt team has developed documentation for this and has provided examples that help you understand the binding.
Along with the Python binding for the core Qt framework, the team has also extended support for various Qt toolkits, such as Qt Widgets and QML, to build interactive GUIs with Qt and Python. Early adopters of Qt for Python can report bugs via the Qt for Python project on bugreports.qt.io, and the team can be reached on Freenode at #qt-pyside.

Read more

Qt 5.11 has arrived!
WebAssembly comes to Qt. Now you can deploy your next Qt app in browser
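To give a feel for the binding described above, here is a minimal PySide 2 "Hello World" sketch. It assumes the Qt for Python preview package is installed (e.g. via pip); it is shown as an illustration of the API style, not as an officially endorsed snippet:

```python
# Minimal Qt for Python (PySide 2) sketch: a window with a single label.
# Assumes the PySide2 package from the Qt for Python preview is installed.
import sys

from PySide2.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)        # every Qt GUI app needs one QApplication
label = QLabel("Hello, Qt for Python!")
label.show()                        # display the widget in its own window
sys.exit(app.exec_())               # enter the Qt event loop
```

Note how closely the Python code mirrors the equivalent C++ Qt program, which is exactly the flexible C++/Python migration the team describes.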