
Tech News - Programming

573 Articles

Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is reportedly working on a new messaging app called Threads that would let users share photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. Users can selectively share content with their friends without revealing to anyone else the list of close friends with whom the content is shared. The app currently does not display real-time location, but according to The Verge it might note that a friend is "on the move".

How does Threads work?

According to The Verge, Threads appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the 'Close Friends' feature for Instagram Stories, which lets users create a list of close friends and make their stories visible only to them. With Threads, users who opt in to 'automatic sharing' will regularly surface status updates and real-time information in a main feed visible to their close friends, with the auto-shared statuses drawn from the phone's sensors. Messages from friends appear in a central feed, with a green dot indicating which friends are currently active/online. If a friend has recently posted a story on Instagram, it can be viewed from the Threads app as well. The app also features a camera for capturing photos and videos to send to close friends. Threads is currently being tested internally at Facebook, and there is no clarity yet about a launch date.

Direct's revamped version or Snapchat's potential competitor?

If Instagram manages to create a niche around 'close friends' with Threads, it might shift a significant proportion of Snapchat's users to its platform. In 2017, the team experimented with Direct, a standalone camera messaging app with many Snapchat-like filters. But in May this year, the company announced that it would no longer support Direct. Threads looks like Facebook's second attempt to compete with Snapchat.

https://twitter.com/MattNavarra/status/1128875881462677504

Threads' focus on strengthening 'close friends' relationships might encourage sharing of even more personal data, including location and battery life. This begs the question: is our content really safe? Just three months ago, Instagram was in the news for exposing the personal data of millions of influencers online. The exposed data included contact information of Instagram influencers, brands, and celebrities.

https://twitter.com/hak1mlukha/status/1130532898359185409

According to Instagram's current Terms of Use, it does not take ownership of the information shared on it. But here's the catch: it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content, as per the user's privacy settings. In essence, the platform has a right to use the content we post.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules


CUDA 10.1 released with new tools, libraries, improved performance and more

Amrata Joshi
28 Feb 2019
2 min read
Yesterday, the team at NVIDIA released CUDA 10.1 with a new lightweight GEMM library, new functionality and performance updates to existing libraries, and improvements to the CUDA Graphs APIs.

What's new in CUDA 10.1?

There are new encoding and batched decoding functionalities in nvJPEG. This release also features faster performance for a broad set of random number generators in cuRAND, along with improved performance and support for fork/join kernels in the CUDA Graphs APIs.

Compiler

In this release, the CUDA-C and CUDA-C++ compiler, nvcc, is found in the bin/ directory. It is built on top of the NVVM optimizer, which is itself built on top of the LLVM compiler infrastructure. Developers who want to target NVVM directly can do so using the Compiler SDK, which is available in the nvvm/ directory.

Tools

New development tools are available in the bin/ directory, including IDEs such as nsight (Linux, Mac) and Nsight VSE (Windows), and debuggers such as cuda-memcheck, cuda-gdb (Linux), and Nsight VSE (Windows). The tools also include a few profilers and utilities.

Libraries

This release comes with cuBLASLt, a new lightweight GEMM library with a flexible API and tensor core support for INT8 inputs and FP16 CGEMM split-complex matrix multiplication. CUDA 10.1 also features the selective eigensolvers SYEVDX and SYGVDX in cuSOLVER. A few of the available utility libraries in the lib/ directory (DLLs on Windows are in bin/) are cublas (BLAS), cublas_device (BLAS Kernel Interface), and cuda_occupancy (Kernel Occupancy Calculation [header file implementation]).

To know more about this news in detail, check out the post by NVIDIA.

Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
ClojureCUDA 0.6.0 now supports CUDA 10
Stable release of CUDA 10.0 out, with Turing support, tools and library changes


Microsoft releases the Python Language Server in Visual Studio

Kunal Chaudhari
27 Jul 2018
3 min read
Last week Microsoft announced the release of the Python Language Server, which is part of the July release of the Python extension for Visual Studio Code and will be released as a standalone product in the near future. IntelliSense, Microsoft's code analysis and suggestion tool, has supported Python since 2011, but that language support can now be extended to other tools using the Microsoft Language Server.

IntelliSense and the Language Server demystified

IntelliSense is the general term for a number of features such as List Members, Parameter Info, Quick Info, and Complete Word. These features help developers learn more about the code they are using and keep track of parameters. With IntelliSense, Microsoft has long featured completion, which makes writing code faster and less error-prone. Many aspects of IntelliSense are language-specific, and many of its features are powered by a language server. Building all these smart features into IntelliSense takes massive effort, and traditionally this effort is repeated for each development tool, since each tool provides different APIs for implementing the same feature. A language server significantly reduces this effort by providing language-specific features to different tools through a standard protocol known as the Language Server Protocol (LSP). This way, a single language server can be reused in multiple development tools, and each tool can in turn support multiple languages with minimal effort.

Benefits of the Python Language Server

Python IntelliSense has been supported in Visual Studio since 2011 and is one of its most downloaded extensions, but it has been limited to Visual Studio developers. The Visual Studio team at Microsoft plans to separate Python IntelliSense from Visual Studio and make it available as a standalone program using the Language Server Protocol.

Steve Dower, a developer at Microsoft, wrote in his blog that "Having a standalone, cross-platform language server means that we can continue to innovate and improve on our IntelliSense experience for Python developers in both Visual Studio and Visual Studio Code at the same time".

The July release of Visual Studio Code's Python extension includes features such as:
- Syntax errors appear as the code is typed
- Improved performance for analyzing workspaces and presenting completions
- The ability to detect syntax errors within the entire workspace
- Faster startup times and imports
- Better handling for several language constructs

The standalone release of the Python Language Server will arrive in a few months; until then, check out the VS Code release announcement for more information.

Microsoft's GitHub acquisition is good for the open source community
Microsoft launches a free version of its Teams app to take Slack head on
Microsoft's Brad Smith calls for facial recognition technology to be regulated
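The Language Server Protocol mentioned above is a JSON-RPC protocol framed with an HTTP-style Content-Length header. As a rough illustration of how any editor talks to any language server (this is a sketch of the public LSP wire format, not Microsoft's implementation; the file path and position values are made up):

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the Content-Length header LSP requires."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A completion request, as an LSP client (the editor) might send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},  # hypothetical file
        "position": {"line": 3, "character": 10},
    },
}
message = frame_lsp_message(request)
print(message.decode("utf-8").split("\r\n\r\n")[0])  # Content-Length: ...
```

Because every feature (completion, hover, go-to-definition) is just another method name in this framing, one server can serve many editors, which is the reuse the article describes.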


Fedora 31 will now come with Mono 5 to offer open-source .NET support

Amrata Joshi
12 Mar 2019
2 min read
Fedora has shipped Mono 4.8, the open source development platform for building cross-platform applications, with every release, even after Mono 5.0 shipped in May 2017. That will finally change with Fedora 31: the Fedora team is planning to switch to Mono 5.20, which is expected to release later this year.

Over the past few months, the Fedora team has worked on building Mono from source. The build was also done for Debian using mcs instead of csc, and the reference assemblies were rebuilt from source. Mono requires itself to build, and the Mono 4.8 currently included in Fedora is too old to build version 5.20. For the first-time build, the team has been using monolite (a minimal version of the Mono compiler) and the .NET 4.7.1 reference assemblies. The sources for the required patch files are maintained on GitHub.

The transition from Mono 4 to Mono 5 was on hold because of the changes required in the compiler stack and its dependency on some binary references. These binaries are available as source but are treated as pre-compiled binaries for simplicity and speed. The Fedora developers are now working towards getting Mono 5 into Fedora 31. This will also let cross-platform applications that rely on Microsoft's .NET Framework 4.7 and later work. Mono 4.8 is also not compatible with PowerPC 64-bit, but Mono 5 is expected to be.

To know more about this news, check out the change proposal.

Fedora 29 released with Modularity, Silverblue, and more
Swift is now available on Fedora 28
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes


IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntactic coloration, and more. IPython 7.0 brings new features such as asyncio integration, support for new async libraries, and async support in notebooks. IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages. It's the Jupyter kernel for Python, used by millions of users. Let's discuss the key features of the IPython 7.0 release.

AsyncIO integration

IPython 7.0 integrates IPython with asyncio, which means you no longer have to import or configure asyncio to use await interactively. asyncio is a library for writing concurrent code using the async/await syntax. It is used as a foundation for multiple Python asynchronous frameworks providing high-performance network and web servers, database connection libraries, distributed task queues, and more. Just remember that asyncio won't magically make your code faster; it makes concurrent code easier to write.

New async libraries (Curio and Trio integration)

Python's async and await keywords help simplify asynchronous programming and the standardization around asyncio. They also allow experimentation with new paradigms for asynchronous libraries. Two such libraries, Curio and Trio, are now supported in IPython 7.0. Both explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate. Curio is a library for performing concurrent I/O and common system programming tasks, making use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python that lets you write programs that do multiple things at the same time with parallelized I/O.

Async support in notebooks

Async code now works in a notebook when using ipykernel. With IPython 7.0, async works with all frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. The default code runs in the existing asyncio/tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython
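For readers unfamiliar with the async/await syntax that IPython 7.0 now understands at the top level, here is a minimal standalone asyncio sketch in plain Python. In an IPython 7.0 session you could type the await expression directly at the prompt; outside IPython, the asyncio.run() wrapper shown below is needed. The function names and delays are invented for illustration.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate a slow I/O operation without blocking the event loop.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Run both "requests" concurrently; total time is about the
    # longest single delay, not the sum of both.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

print(asyncio.run(main()))  # ['a done', 'b done']
```

This is the "easier to write, not magically faster" point from the article: the speedup comes from overlapping waits on I/O, not from the syntax itself.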


GitLab 11.3 released with support for Maven repositories, protected environments and more

Prasad Ramesh
24 Sep 2018
2 min read
GitLab 11.3 was released on Saturday with support for Maven repositories, Code Owners, Protected Environments, and other changes. These newly added features help automate controls around environments and code while providing further efficiencies for Java developers.

Maven repositories in GitLab 11.3

Maven repositories are now directly available in GitLab. This gives Java developers a secure, standardized way to share versioned Maven libraries, and saves time by reusing those libraries across projects; the feature is available only on GitLab Premium. Lower-level services can now have their packaged libraries published to their project's Maven repository. Teams can share a simple XML snippet with other teams to use the library, while Maven and GitLab do the rest.

Code Owners and Protected Environments

GitLab Starter now supports assigning Code Owners to files, indicating the appropriate team members contributing to the code. This is a primer for future releases, which will enforce internal controls at the code level. Operators can also use Protected Environments to set permissions determining which users can deploy code to production environments, significantly reducing the risk of an unintended commit. This feature is also available only on Premium.

Epic forecasting with integrated milestone dates

The new Portfolio Management feature in GitLab Ultimate forecasts an epic's start and end dates automatically based on the milestone dates of its issues. Portfolio managers can compare their planned start and end dates against the scheduled work, enabling faster decisions on delivery and plan adjustments. In older versions, fixed values could be set for the planned start and end dates of an epic. This was useful for high-level planning, but as issues are attached to the epic and scheduled against actual milestones, it is useful to have the epic dates reflect those milestones. In this version, the static dates can be changed to a dynamic value called 'From milestones'; epic planned end dates work analogously. This is a useful feature if you want a seamless transition from high-level, top-down planning to micro-level, bottom-up planning.

For more information, visit the GitLab website.

GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
GitLab 11.2 releases with preview changes in Web IDE, Android Project Import and more
GitLab is moving from Azure to Google Cloud in July

Rust and Web Assembly announce ‘wasm-bindgen 0.2.16’ and the first release of ‘wasm-bindgen-futures’

Savia Lobo
14 Aug 2018
3 min read
Yesterday, the Rust and WebAssembly community made two announcements: it released wasm-bindgen version 0.2.16, and it published the first release of wasm-bindgen-futures.

wasm-bindgen facilitates high-level communication between JavaScript and Rust compiled to WebAssembly. It lets you speak in terms of Rust structs, JavaScript classes, strings, etc., instead of only the integers and floats supported by WebAssembly's raw calling convention. wasm-bindgen is designed to support the upcoming "Host Bindings" proposal, which will eliminate the need for any kind of JavaScript shim functions between WebAssembly functions and native DOM functions.

What's new in wasm-bindgen 0.2.16

Added features:
- Added the wasm_bindgen::JsCast trait, as described in RFC #2.
- Added support for receiving Option<&T> parameters from JavaScript in exported Rust functions and methods, and for receiving Option<u32> and other option-wrapped scalars.
- Added reference documentation to the guide for every #[wasm_bindgen] attribute and how it affects the generated bindings.

Changes in version 0.2.16:
- Restructured the guide's documentation on passing JS closures to Rust, and Rust closures to JS. Also improved the guide's documentation on using serde to serialize complex data to JsValue and deserialize JsValues back into complex data.
- Static methods are now always bound to their JS class, as is required for Promise's static methods.

The newly released wasm-bindgen-futures

wasm-bindgen-futures is a crate that bridges the gap between a Rust Future and a JavaScript Promise. It provides two conversions: from a JavaScript Promise into a Rust Future, and from a Rust Future into a JavaScript Promise. The two main interfaces in this crate are:

JsFuture: constructed from a Promise, it can then be used as a Future<Item = JsValue, Error = JsValue>. This Rust future will resolve or reject with the value coming out of the Promise.

future_to_promise: converts a Rust Future<Item = JsValue, Error = JsValue> into a JavaScript Promise. The future's result translates to either a resolved or rejected Promise in JavaScript.

These two items provide enough of a bridge to interoperate the two systems and make sure that Rust and JavaScript can work together on asynchronous and I/O work.

To know more about wasm-bindgen 0.2.16 and wasm-bindgen-futures, visit the GitHub page.

Warp: Rust's new web framework for implementing WAI (Web Application Interface)
Rust 1.28 is here with global allocators, nonZero types and more
Say hello to Sequoia: a new Rust based OpenPGP library to secure your apps


Twitter announces to test ‘Hide Replies’ feature in the US and Japan, after testing it in Canada

Amrata Joshi
20 Sep 2019
4 min read
Yesterday, the team at Twitter announced that it is testing a new feature called "Hide Replies" in the US and Japan, after testing it in Canada. Twitter's Hide Replies feature lets users hide unwanted trolling, abusive, or bullying replies to their tweets. The company aims for more civilized conversations on Twitter and wants to give users more control. Users can decide which replies are hidden from other users, but anyone who chooses to view the hidden replies can still see them by clicking an icon that brings up all the hidden tweets. Replies can be hidden on both the app and the desktop version of the website.

Observations from the Canadian 'Hide Replies' test

In July this year, the Twitter team tested the 'Hide Replies' feature in Canada and tried to understand how conversations on the platform change when the person who starts a conversation hides replies. The team observed that users often hide replies they consider irrelevant, unintelligible, or abusive. According to the survey, those who used the feature found it helpful, and users were more likely to reconsider their interactions when their tweets were hidden: around 27% of users who had their tweets hidden said they would reconsider how they interact with others in the future. Hiding someone's replies can be misunderstood and lead to confusion, so Twitter now asks the user whether they also wish to block the account. The official post reads, "People were concerned hiding someone's reply could be misunderstood and potentially lead to confusion or frustration. As a result, now if you tap to hide a Tweet, we'll check in with you to see if you want to also block that account." According to the team, the Canadian test showed positive results, as the feature helped users have better conversations.

In an announcement regarding the feature's Canada launch, the company said, "Everyday, people start important conversations on Twitter, from #MeToo and #BlackLivesMatter, to discussions around #NBAFinals or their favorite television shows. These conversations bring people together to debate, learn, and laugh. That said we know that distracting, irrelevant, and offensive replies can derail the discussions that people want to have. Ultimately, the success of 'hide replies' will depend on how people use it, but it could mean friendlier — and more filtered — conversations."

Twitter's Hide Replies feature: will it really improve conversations?

The Hide Replies feature is a welcome addition alongside Twitter's block and mute options, but it could amount to a slight restriction on freedom of speech. If a reply isn't abusive or offensive but simply expresses strong views on a subject, and the author still decides to hide it, the user who replied might not understand why. The good news is that users can still opt to see the hidden replies, so hidden responses aren't completely silenced; it just takes an extra click to view them. Then again, if the platform still shows the hidden replies, the purpose of hiding them is partly defeated. It is also not clear how Twitter will curtail abusive comments or bullying in a thread with this feature, since it doesn't delete such replies but simply hides them. Some Twitter users are unhappy with the feature and consider it pointless if a hidden reply reappears with a single click on the option to see hidden replies.

https://twitter.com/QWongSJ/status/1174795321211158528
https://twitter.com/scott_satzer/status/1174890804143374336
https://twitter.com/CartridgeGames/status/1174857548777885697
https://twitter.com/camimosas/status/1174850022694952960
https://twitter.com/KyleTWN/status/1174828502769471488
https://twitter.com/iFireMonkey/status/1174791634736861207

To know more about this news, check out the official post.

Other interesting news in programming

Dart 2.5 releases with the preview of ML complete, the dart:ffi foreign function interface and improvements in constant expressions
Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code
DevOps coding platform GitLab more than doubles its valuation to $2.75 billion since its last funding, ahead of its IPO in 2020


GitHub has passed an incredible 100 million repositories

Richard Gall
12 Nov 2018
2 min read
It has been a big year for GitHub. The code sharing platform has celebrated its 10th birthday, been bought by Microsoft for an impressive $7.5 billion, and has now reached an astonishing 100 million repositories. While there were rumblings of discontent following the huge Microsoft acquisition, it doesn't look like threats to leave GitHub have come to fruition. True, it has only been a matter of weeks since Microsoft finally took over, but there are no signs that GitHub is losing favor with developers.

1 in 3 of all GitHub repositories were created in 2018

According to GitHub, 1 in 3 of the 100 million repositories were created in 2018. That demonstrates the astonishing growth of the platform, and just how embedded it is in the day-to-day life of software engineers. This is further underlined by more data in GitHub's Octoverse report, published in October. "We've seen more new accounts in 2018 so far than in the first six years of GitHub combined," the report states. Perhaps the new relationship with Microsoft has actually helped push GitHub from strength to strength: MicrosoftDocs/azure-docs is the fastest growing repository of 2018. Of course, some credit should probably go to Microsoft as well; the organization has done a lot to change its image and ethos, becoming much more friendly towards open source software. Meanwhile, at Packt, we've been delighted to play a small part in helping GitHub reach its 100 million milestone. Earlier this year we hit 2,000 project repos.


Qt creator 4.8 beta released, adds language server protocol

Prasad Ramesh
12 Oct 2018
2 min read
The Qt team announced the release of Qt Creator 4.8 Beta yesterday. It includes generic programming language support and some experimental C++ features new since 4.7.

Generic programming languages in Qt Creator 4.8 Beta

Qt Creator 4.8 Beta introduces experimental support for the Language Server Protocol (LSP). Many programming languages have a language server, and Go plans to include one as well. A language server provides features like code completion and reference finding to IDEs. Thanks to LSP support, Qt Creator gains some support for many programming languages simply by providing a client for the protocol. Currently, Qt Creator supports code completion, highlighting of the symbol under the cursor, and jumping to the symbol definition, and it integrates diagnostics from the language server. Highlighting and indentation are still provided by the generic highlighter. The client has been tested mostly with Python. There is currently no support for language servers that require special handling.

C++ support

Several experimental C++ features were added in this release.

Editing compilation databases: a compilation database is a list of files and the compiler flags used to compile them. You can now open a compilation database as a project solely for editing and navigating code. Try it by enabling the CompilationDatabaseProjectManager plugin.

Clang-format-based indentation: auto-indentation is done via LibFormat, the backend used by clang-format. To try this, enable the ClangFormat plugin.

Cppcheck diagnostics: diagnostics generated by the Cppcheck tool are integrated into the editor. Enable the Cppcheck plugin to use it.

In addition to many fixes, the Clang code model can now jump to the symbol indicated by the auto keyword. It also allows you to generate a compilation database from the information the code model has, via Build | Generate Compilation Database.

Debugging

There is now support for running multiple debuggers on one or more executables simultaneously. When multiple debuggers are running, you can switch between them with a new drop-down menu in Debug mode.

More about the various improvements and fixes can be found in the changelog. For further details, visit the Qt Blog. Qt Creator 4.8 can be downloaded from the Qt website.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements
How to create multithreaded applications in Qt
How to Debug an application using Qt Creator
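The compilation database mentioned above is conventionally a compile_commands.json file: a JSON array where each entry records a source file, the directory the compile was run from, and the exact compiler command. As a rough sketch (the project paths, file names, and flags here are invented for illustration), such a file could be generated like this:

```python
import json

# Two hypothetical translation units and the commands that compile them,
# following the Clang JSON Compilation Database format.
compile_commands = [
    {
        "directory": "/home/user/project/build",
        "command": "g++ -std=c++17 -Iinclude -c ../src/main.cpp",
        "file": "../src/main.cpp",
    },
    {
        "directory": "/home/user/project/build",
        "command": "g++ -std=c++17 -Iinclude -c ../src/util.cpp",
        "file": "../src/util.cpp",
    },
]

with open("compile_commands.json", "w") as f:
    json.dump(compile_commands, f, indent=2)
```

With the CompilationDatabaseProjectManager plugin enabled, a file like this can be opened in Qt Creator as a project for code editing and navigation.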

Microsoft releases TypeScript 3.4 with an update for faster subsequent builds, and more

Bhagyashree R
01 Apr 2019
3 min read
Last week, Daniel Rosenwasser, Program Manager for TypeScript, announced the release of TypeScript 3.4. This release comes with faster subsequent builds via the '--incremental' flag, higher order type inference from generic functions, type-checking for globalThis, and more. Following are some of the updates in TypeScript 3.4:

Faster subsequent builds

TypeScript 3.4 adds the '--incremental' flag, which records the project graph from the last compilation. When TypeScript is invoked with '--incremental' set to 'true', it finds the least costly way to type-check and emit changes to a project by consulting the saved project graph.

Higher order type inference from generic functions

This release comes with various improvements around inference, chief among them functions inferring types from other generic functions. During type argument inference, TypeScript now propagates type parameters from generic function arguments onto the resulting function type.

Updates to ReadonlyArray and readonly tuples

Using read-only array-like types is now much easier. This release introduces a new syntax for ReadonlyArray that uses a new readonly modifier for array types:

function foo(arr: readonly string[]) {
    arr.slice();        // okay
    arr.push("hello!"); // error!
}

TypeScript 3.4 also adds support for readonly tuples. To make a tuple readonly, just prefix it with the readonly keyword.

Type-checking for globalThis

This release supports type-checking ECMAScript's new globalThis, a global variable that refers to the global scope. globalThis provides a standard way of accessing the global scope that works across different environments.

Breaking changes

Because this release introduces a few updates to inference, it comes with some breaking changes:
- TypeScript now uses types that flow into function calls to contextually type function arguments.
- The type of top-level 'this' is now typed as 'typeof globalThis' instead of 'any'. As a result, users might get errors for accessing unknown values on 'this' under 'noImplicitAny'.
- TypeScript 3.4 correctly measures the variance of types declared with 'interface' in all cases. This introduces an observable breaking change for interfaces that used a type parameter only in keyof.

To see the full list of updates in TypeScript 3.4, check out the official announcement.

An introduction to TypeScript types for ASP.NET core [Tutorial]
TypeScript 3.3 is finally released!
Yarn releases a roadmap for Yarn v2 and beyond; moves from Flow to TypeScript
Sugandha Lahoti
26 Oct 2018
3 min read

Rust 1.30 releases with procedural macros and improvements to the module system

Yesterday, the Rust team released a new version of the Rust systems programming language, known for its safety, speed, and concurrency. Rust 1.30 comes with procedural macros, module system improvements, and more.

It has been an incredibly successful year for the Rust programming language in terms of popularity. It jumped from being the 46th most popular language on GitHub last year to the 18th position this year. The 2018 RedMonk Programming Language Rankings marked the entry of Rust into their Top 25 list. It also topped the list of most loved programming languages in the 2018 Stack Overflow developer survey for the third year in a row. Still not satisfied? Here are 9 reasons why Rust programmers love Rust.

Key improvements in Rust 1.30

Procedural macros are now available
Procedural macros allow for more powerful code generation. Rust 1.30 introduces two different kinds of advanced macros: “attribute-like procedural macros” and “function-like procedural macros.” Attribute-like macros are similar to custom derive macros, but instead of generating code only for the #[derive] attribute, they allow you to create new, custom attributes of your own. They are also more flexible: derive only works for structs and enums, but attributes can go in other places, such as on functions. Function-like macros define macros that look like function calls. Developers can now also bring macros into scope with the use keyword.

Updates to the module system
The module system has received significant improvements to make it more straightforward and easier to use. In addition to bringing macros into scope, the use keyword has two other changes.

First, external crates are now in the prelude. Previously, moving a function to a submodule could break some of its code. Now, when a function is moved, Rust checks the first part of the path to see if it is an extern crate, and if it is, uses it regardless of where the code sits in the module hierarchy.

Second, use supports bringing items into scope with paths starting with crate. Previously, paths specified after use would always start at the crate root, while paths referring to items directly would start at the local path, so the behavior of paths was inconsistent. Now, the crate keyword at the start of a path indicates that the path should start at the crate root. Combined, these changes lead to a more straightforward understanding of how paths resolve.

Other changes
Developers can now use keywords as identifiers using the raw identifiers syntax (r#), e.g. let r#for = true;
Using anonymous parameters in traits is now deprecated with a warning and will be a hard error in the 2018 edition.
Developers can now match visibility keywords (e.g. pub, pub(crate)) in macros using the vis specifier.
Non-macro attributes now allow all forms of literals, not just strings. Previously, you would write #[attr("true")]; now you can write #[attr(true)].
Developers can now specify a function to handle a panic in the Rust runtime with the #[panic_handler] attribute.

These are just a select few updates. For more information and code examples, go through the Rust Blog.

3 ways to break your Rust code into modules
Rust as a Game Programming Language: Is it any good?
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes
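The raw identifier syntax mentioned above can be sketched as follows. This is an illustrative example; the function name r#match is a hypothetical choice to show a keyword reused as an identifier.

```rust
// Sketch of Rust 1.30's raw identifier syntax (r#), which lets keywords
// be used as ordinary identifiers.

// `match` is a keyword, but r#match makes it usable as a function name.
pub fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

fn main() {
    // `for` is a keyword, but r#for is treated as a plain identifier.
    let r#for = true;
    assert!(r#for);

    // Calling the keyword-named function also uses the r# prefix.
    assert!(r#match("oba", "foobar"));
    println!("raw identifiers compile and run");
}
```

This syntax is mostly useful for interoperating across editions, e.g. calling an item named after a word that only became a keyword in a later edition.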
Prasad Ramesh
08 Feb 2019
2 min read

The January 2019 release of Visual Studio Code v1.31 is out

The January 2019 release of Visual Studio Code v1.31 is now available. This update brings tree UI improvements, updates to the main menu, no reload on extension installation, and other changes.

Features of Visual Studio Code v1.31

No more reloads on installing extensions
This was one of the most requested features in the VS Code community. Now you don’t have to reload VS Code whenever you install a new extension. A reload is not needed even when you uninstall an extension that hasn’t been activated.

Improvements to the tree UI
There is a new tree widget based on the existing list widget. This tree UI was adopted in the File Explorer, all debug trees, search, and peek references. The tree UI brings features like:
Better keyboard navigation for faster access
Hierarchical select all in a tree, starting from the inner node the cursor is on
Customizable indentation for trees
Recursive expand/collapse of all tree nodes
Horizontal scrolling

Improvements to menus
There are more navigation actions in the Go menu so that they can be discovered easily. The Cut command is now available on the Explorer context menu.

Changes in the editor
Text selection is smarter. Search history is shown below the search bar in the References view. Long descriptions can be written using string arrays.

Semantic selection
Semantic selection is now available in HTML, CSS/LESS/SCSS, and JSON.

Reflow support in the integrated terminal
The terminal will now automatically wrap and unwrap text whenever it is resized.

New input variable
Input variables were introduced in the previous milestone. In Visual Studio Code 1.31, there is a new input variable type called command, which runs an arbitrary command when the input variable is interpolated.

Updated extension API documentation
The VS Code API documentation was rewritten and moved to its own table of contents.

For more details on the improvements in Visual Studio Code 1.31 (January 2019), visit the release notes.
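The command input variable described above is configured in tasks.json. The sketch below is illustrative: the task label and the extension command ID (extension.mochaSupport.testPicker) are hypothetical placeholders, not part of the release notes.

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "run picked test",
      "type": "shell",
      "command": "npm test -- ${input:pickTest}"
    }
  ],
  "inputs": [
    {
      "id": "pickTest",
      "type": "command",
      "command": "extension.mochaSupport.testPicker"
    }
  ]
}
```

When the task runs, VS Code executes the referenced command and substitutes its result wherever ${input:pickTest} appears.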
Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019 Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more! Neuron: An all-inclusive data science extension for Visual Studio
Amrata Joshi
22 May 2019
2 min read

Facebook releases Pythia, a deep learning framework for vision and language multimodal research

Yesterday, the team at Facebook released Pythia, a deep learning framework that supports multitasking in vision and language multimodal research. Pythia is built on the open-source PyTorch framework and enables researchers to easily build, reproduce, and benchmark AI models.

https://twitter.com/facebookai/status/1130888764945907712

It is designed for vision and language tasks, such as answering questions related to visual data and automatically generating image captions. The framework also incorporates elements of Facebook’s winning entries in recent AI competitions, including the VQA Challenge 2018 and the VizWiz Challenge 2018.

Features of Pythia
Reference implementations: Pythia includes reference implementations that show how previous state-of-the-art models achieved related benchmark results.
Performance gauging: It also helps in gauging the performance of new models.
Multitasking: Pythia supports multitasking and distributed training.
Datasets: It includes built-in support for various datasets, including VizWiz, VQA, TextVQA, and VisualDialog.
Customization: Pythia offers custom losses, metrics, scheduling, optimizers, and TensorBoard support to fit users’ needs.
Unopinionated: Pythia is unopinionated about the dataset and model implementations built on top of it.

The goal of the team behind Pythia is to accelerate AI models and their results, and to make it easier for the AI community to build on, and benchmark against, successful systems. The team hopes that Pythia will also help researchers develop adaptive AI that synthesizes multiple kinds of understanding into a more context-based, multimodal understanding. The team also plans to continue adding tools, datasets, tasks, and reference models.

To know more about this news, check out the official Facebook announcement.
Facebook tightens rules around live streaming in response to the Christchurch terror attack Facebook again, caught tracking Stack Overflow user activity and data Facebook bans six toxic extremist accounts and a conspiracy theory organization  
Vincy Davis
06 Aug 2019
3 min read

DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers

Today, DeepCode, the tool that uses artificial intelligence (AI) to help developers write better code, raised $4M in seed funding to expand its machine learning systems for code reviews. DeepCode plans to expand its list of supported languages (by adding C#, PHP, and C/C++), improve the scope of its code recommendations, and grow the team internationally. It has also been revealed that DeepCode is working on its first integrated developer environment (IDE) project.

The funding round was led by Earlybird, with participation from 3VC and btov Partners, DeepCode’s existing investor.

DeepCode has also announced a new pricing structure. Previously, it was only free for open source software development projects. Today, it announced that it will also be free for educational purposes and for enterprise teams with 30 developers.

https://twitter.com/DeepCodeAI/status/1158666106690838528

Launched in 2016, DeepCode flags bugs, critical vulnerabilities, and style violations in the earlier stages of software development. Currently, DeepCode supports Java, JavaScript, and Python. When developers link their GitHub or Bitbucket accounts to DeepCode, the DeepCode bot processes millions of commits in the available open source software projects and highlights broken code that can cause compatibility issues. In a statement to VentureBeat, Paskalev says that DeepCode saves 50% of the time developers spend finding bugs.

Read Also: Thanks to DeepCode, AI can help you write cleaner code

Earlybird co-founder and partner Christian Nagel says, “DeepCode provides a platform that enhances the development capabilities of programmers. The team has a deep scientific understanding of code optimization and uses artificial intelligence to deliver the next breakthrough in software development.”

Many open source projects have been getting major investments from tech companies lately. Last year, the software giant Microsoft acquired the open source code platform GitHub for $7.5 billion. Another popular platform for distributed version control and source code management, GitLab, also raised a $100 million Series D funding round. With the software industry growing, the amount of code being written has increased greatly, requiring more testing and debugging. DeepCode receiving funds is definitely good news for the developer community.

https://twitter.com/andreas_herzog/status/1158666757588115456
https://twitter.com/evanderburg/status/1158710341963935745

Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker
Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin