
How-To Tutorials - AI Tools

89 Articles

The Future of Data Analysis with PandasAI

Gabriele Venturi
06 Oct 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Data analysis often involves complex, tedious coding tasks that make it seem reserved for experts. But imagine a future where anyone could gain insights through natural conversation, where your data speaks plainly instead of through cryptic tables. PandasAI makes this future a reality. In this comprehensive guide, we'll walk through all aspects of adding conversational capabilities to data analysis workflows using this powerful new library. You'll learn:

● Installing and configuring PandasAI
● Querying data and generating visualizations in plain English
● Connecting to databases, cloud storage, APIs, and more
● Customizing the PandasAI config
● Integrating PandasAI into production workflows
● Use cases across industries like finance, marketing, science, and more

Follow along to master conversational data analysis with PandasAI!

Installation and Configuration

Install PandasAI

Let's start by installing PandasAI using pip or poetry.

To install with pip:

```
pip install pandasai
```

Make sure you are using an up-to-date version of pip to avoid installation issues.

For managing dependencies, we recommend using poetry:

```
# Install poetry
pip install --user poetry

# Install pandasai
poetry add pandasai
```

This installs PandasAI and all its dependencies.

For advanced usage, install all optional extras:

```
poetry add pandasai --all-extras
```

This includes dependencies for additional capabilities you may need later, such as connecting to databases, using different NLP models, and advanced visualization.

With PandasAI installed, we are ready to import it and explore its conversational interface!

Import and Initialize PandasAI

Let's initialize a PandasAI DataFrame from a CSV file:

```python
from pandasai import SmartDataframe

df = SmartDataframe("sales.csv")
```

This creates a SmartDataframe that wraps the underlying Pandas DataFrame but adds conversational capabilities.

We can customize initialization through configuration options:

```python
from pandasai.llm import OpenAI

llm = OpenAI("<your api key>")
config = {"llm": llm}
df = SmartDataframe("sales.csv", config=config)
```

This initializes the DataFrame using the OpenAI model.

For easy multi-table analysis, use SmartDatalake:

```python
from pandasai import SmartDatalake

dl = SmartDatalake(["sales.csv", "inventory.csv"])
```

SmartDatalake lets you converse across multiple related data sources.

We can also connect to live data sources like databases during initialization:

```python
from pandasai.connectors import MySQLConnector

mysql_conn = MySQLConnector(config={
    "host": "localhost",
    "port": 3306,
    "database": "mydb",
    "username": "root",
    "password": "root",
    "table": "loans",
})

df = SmartDataframe(mysql_conn)
```

This connects to a MySQL database so we can analyze the live data interactively.
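Putting these pieces together, a minimal end-to-end session looks something like the sketch below. It assumes the pandasai API exactly as shown above, a local sales.csv file, and an OpenAI API key exported as the OPENAI_API_KEY environment variable (the environment-variable handling is our own convention here, not something the library requires).

```python
import os
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

# Read the API key from the environment instead of hard-coding it
llm = OpenAI(os.environ["OPENAI_API_KEY"])

# Wrap the CSV in a SmartDataframe configured with the LLM
df = SmartDataframe("sales.csv", config={"llm": llm})

# Ask a question in plain English and print the generated answer
answer = df.chat("What were the top 5 products by revenue last quarter?")
print(answer)
```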
Conversational Data Exploration

Ask Questions in Plain English

The most exciting part of PandasAI is exploring data through natural language. Let's go through some examples!

Calculate totals:

```python
df.chat("What is the total revenue for 2022?")
# Prints revenue total
```

Filter data:

```python
df.chat("Show revenue for electronics category")
# Filters and prints electronics revenue
```

Aggregate by groups:

```python
df.chat("Break down revenue by product category and segment")
# Prints table with revenue aggregated by category and segment
```

Visualize data:

```python
df.chat("Plot monthly revenue over time")
# Plots interactive line chart
```

Ask for insights:

```python
df.chat("Which segment has fastest revenue growth?")
# Prints segments sorted by revenue growth
```

PandasAI understands questions in plain English and automatically generates the relevant answers, tables, and charts. We can ask endless questions and immediately get data-driven insights without writing any SQL queries or analysis code!

Connect to Data Sources

A key strength of PandasAI is its broad range of built-in data connectors, which enable conversational analytics on diverse data sources.

Databases:

```python
from pandasai.connectors import PostgreSQLConnector

pg_conn = PostgreSQLConnector(config={
    "host": "localhost",
    "port": 5432,
    "database": "mydb",
    "username": "root",
    "password": "root",
    "table": "payments",
})

df = SmartDataframe(pg_conn)
df.chat("Which products had the most orders last month?")
```

Finance data:

```python
from pandasai.connectors import YahooFinanceConnector

yf_conn = YahooFinanceConnector("AAPL")

df = SmartDataframe(yf_conn)
df.chat("How did Apple stock perform last quarter?")
```

The connectors provide out-of-the-box access to data across domains for easy conversational analytics.

Advanced Usage

Customize Configuration

While PandasAI is designed for simplicity, its architecture is customizable and extensible. We can configure aspects like:

Language model: use different NLP models:

```python
from pandasai.llm import OpenAI, VertexAI

df = SmartDataframe(data, config={"llm": VertexAI()})
```

Custom instructions: add data preparation logic:

```python
config["custom_instructions"] = """
Prepare data:
- Filter outliers
- Impute missing values
"""
```

These options provide advanced control for tailored workflows.

Integration into Pipelines

Since PandasAI is built on top of Pandas, it integrates smoothly into data pipelines:

```python
import pandas as pd
from pandasai import SmartDataframe

# Load raw data
data = pd.read_csv("sales.csv")

# Clean data (clean_data and process_data are your own pipeline functions)
cleaned = clean_data(data)

# PandasAI for analysis
df = SmartDataframe(cleaned)
df.chat("Which products have trending sales?")

# Further processing
final_data = process_data(df)
```

PandasAI's conversational interface can power the interactive analysis stage in ETL pipelines.

Use Cases Across Industries

Thanks to its versatile conversational interface, PandasAI adapts to workflows across many industries. Here are a few examples:

Sales Analytics - Analyze sales numbers, find growth opportunities, and predict future performance.

```python
df.chat("How do sales for women's footwear compare to last summer?")
```

Financial Analysis - Conduct investment research, portfolio optimization, and risk analysis.

```python
df.chat("Which stocks have the highest expected returns given acceptable risk?")
```

Scientific Research - Explore and analyze the results of experiments and simulations.

```python
df.chat("Compare the effects of the three drug doses on tumor size.")
```

Marketing Analytics - Measure campaign effectiveness, analyze customer journeys, and optimize spending.

```python
df.chat("Which marketing channels give the highest ROI for millennial customers?")
```

And many more!
PandasAI fits into any field that leverages data analysis, unlocking the power of conversational analytics for all.

Conclusion

This guide covered a comprehensive overview of PandasAI's capabilities for effortless conversational data analysis. We walked through:

● Installation and configuration
● Asking questions in plain English
● Connecting to databases, cloud storage, and APIs
● Customizing NLP and visualization
● Integration into production pipelines

PandasAI makes data analysis intuitive and accessible to all. By providing a natural language interface, it opens up insights from data to a broad range of users. Start adding a conversational layer to your workflows with PandasAI today! Democratize data science and transform how your business extracts value from data through the power of AI.

Author Bio

Gabriele Venturi is a software engineer and entrepreneur who started coding at the age of 12. Since then, he has launched several projects across gaming, travel, finance, and other spaces, contributing his technical skills to various startups across Europe over the past decade.

Gabriele's true passion lies in leveraging AI advancements to simplify data analysis. This mission led him to create PandasAI, released open source in April 2023. PandasAI integrates large language models into the popular Python data analysis library Pandas, enabling an intuitive conversational interface for exploring data through natural language queries.

By open-sourcing PandasAI, Gabriele aims to share the power of AI with the community and push boundaries in conversational data analytics. He actively contributes as an open-source developer dedicated to advancing what's possible with generative AI.


Using LlamaIndex for AI-Assisted Knowledge Management

Andrei Gheorghiu
08 Jun 2023
10 min read
Introduction

One of the hottest questions of the moment for strategy and decision makers across most industries worldwide is: how can AI help my business?

After all, with great disruption also comes great opportunity. A sound business strategy should not ignore emerging changes in the market. We're still at the early stages of understanding AI, and I'm not going to provide a definitive answer to this question in this article, but the good news is that this article should provide a part of the answer.

Knowledge is power, right? And yet we all know how it is to struggle trying to retain and efficiently re-use the knowledge we gather. We strive to learn from our successes and mistakes, and we invest a lot of time and money in building fancy knowledge bases, just to discover later that we keep repeating the same mistakes and reinventing the wheel. In my experience as a consultant, the biggest issue (especially for medium and large companies) is not a lack of knowledge but, on the contrary, too much knowledge and an inability to use it in a timely and effective manner.

The solution

This article presents a very simple yet effective way of indexing large quantities of pre-existing knowledge that can later be retrieved by natural language queries or integrated with chatbot systems. As usual, take it as a starting point. The code example is trivial and lacks any error handling, but it provides the building blocks to work from. My example builds on your existing knowledge base and leverages LlamaIndex and the power of Large Language Models (in this case, GPT 3.5 Turbo from OpenAI).

Why LlamaIndex? Created by Jerry Liu, LlamaIndex is a robust open-source resource that empowers you to organize and search your data for a variety of applications, including answering questions, summarizing information, or serving as part of a chatbot system. It provides data connectors to ingest your existing data sources in many different formats (such as text files, PDF, docs, SQL, etc.). It then allows you to structure your data (via indices or graphs) so that this data can be easily used with LLMs. In many ways, it is similar to LangChain but more focused on data storage and retrieval instead of automated AI agents.

In short, this article will show you how, with just a few lines of code, you can index your enterprise's knowledge base and then query GPT 3.5 Turbo with your own knowledge base layered on top, in the most natural way: plain English.

Logic diagram: creating the index, then retrieving the knowledge.

Prerequisites

Make sure you check these points before you start writing the code:

● Store your OpenAI API key in a local environment variable for secure and efficient access. The code assumes the key is stored in the OPENAI_API_KEY environment variable.
● I've used Python v3.11. If you're running an older version, an update is recommended to make sure you don't run into any compatibility issues.
● Install the requirements:

```
pip install openai
pip install llama-index
```

● Create a subfolder in your .PY file's location (in my example, the subfolder is named 'stories'). You will store your knowledge base in .TXT files in that location.
If your knowledge articles are in different formats (e.g., PDF or DOCX), you will have to either:

● Change the code to use a different LlamaIndex data connector (https://gpt-index.readthedocs.io/en/latest/reference/readers.html), which is my recommended solution, or:
● Convert all your documents to .TXT format and use the code as it is.

For my demo, I have created (with the help of GPT-4) three fictional stories that will represent our proprietary 'knowledge' base, stored in the 'stories' folder.

The Code

For the sake of simplicity, I've split the functionality into two different scripts:

● Index_stories.py (responsible for reading the 'stories' folder, creating an index, and saving it for later queries)
● Query_stories.py (demonstrating how to query GPT 3.5 and then filter the AI response through our own knowledge base)

Let's begin with Index_stories.py:

```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Loading from a directory
documents = SimpleDirectoryReader('stories').load_data()

# Construct a vector store index
index = GPTVectorStoreIndex.from_documents(documents)

# Save the index to disk (the ./storage sub-folder by default)
index.storage_context.persist()
```

As you can see, the code uses SimpleDirectoryReader from LlamaIndex to read all .TXT files from the 'stories' folder. It then creates a simple vector index that can later be used to run queries over the content of these documents.

In case you're wondering what a vector index represents: imagine you're in a library with thousands of books, and you're looking for a specific book. Instead of having to go through each book one by one, this index acts like a library catalog. It helps you find the book you're looking for quickly. In the context of this code, GPTVectorStoreIndex is like that library catalog. It's a tool that helps organize and find specific pieces of information (like documents or stories) quickly and efficiently. When you ask a question, it looks through all the information it has and finds the most relevant answer for you. It's like a super-efficient librarian that knows exactly where everything is.

The last line of the code saves the index in a sub-folder called 'storage' so that we do not have to recreate it every time and can reuse it in the future.

Now, for the querying part. Here's the second script, Query_stories.py:

```python
from llama_index import GPTVectorStoreIndex, StorageContext, load_index_from_storage
import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

def prompt_chatGPT(task):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": task}
        ]
    )
    AI_response = response['choices'][0]['message']['content'].strip()
    return AI_response

# Rebuild the storage context
storage_context = StorageContext.from_defaults(persist_dir="storage")

# Load the index
index = load_index_from_storage(storage_context)

# Querying GPT 3.5 Turbo
prompt = "Tell me how Tortellini Macaroni's brother managed to conquer Rome."
answer = prompt_chatGPT(prompt)
print('Original AI answer: ' + answer + '\n\n')

# Refining the answer in the context of our knowledge base
query_engine = index.as_query_engine()
response = query_engine.query(f'The answer to the following prompt: "{prompt}" is: "{answer}". If the answer is aligned with our knowledge, return the answer. Otherwise, return a corrected answer')
print('Custom knowledge answer: ' + str(response))
```

How it works

After indexing the 'stories' folder, running the Query_stories.py script first loads the index from the 'storage' sub-folder. It then prompts the GPT 3.5 Turbo model with a hard-coded question: "Tell me how Tortellini Macaroni's brother managed to conquer Rome." After the response is received, it queries our 'stories' to see if the answer aligns with our 'knowledge'. Then you'll receive two answers.

The first one is the original answer from GPT 3.5 Turbo. As expected, the AI model identified Mr. Spaghetti as a potentially fictional character and could not find any historical references to him conquering Rome. The second answer, though, checks against our 'knowledge' and, because we have different information in our 'stories', it modifies the answer accordingly.

If you've read the three GPT-4-created stories, you'll have noticed that Story1.txt mentions Biscotti as a fictional conqueror of Rome but not his brother, and Story2.txt mentions Tortellini and his farm adventures but does not mention any relationship with Biscotti. Only the third story (Story3.txt) describes the nature of their relationship. This shows not only that the vector index managed to correctly record the knowledge from the individual stories, but also that the query function managed to provide a contextual response to our question.

In addition to the Vector Store Index, there are several other types of indexes that can be used depending on the specific needs of your project.

For instance, the List Index simply stores Nodes as a sequential chain, making it a straightforward and efficient choice for applications where the order of data matters and where you frequently need to access all the data in the order it was added. An example might be a timeline of events or a log of transactions, where you often want to retrieve all entries in chronological order.

Another option is the Tree Index, which builds a hierarchical tree from a set of Nodes and can be particularly useful when dealing with complex, nested data. For instance, if you're building a file system explorer, a tree index would be a good choice because files and directories naturally form a tree-like structure.

There's also the Keyword Table Index, which extracts keywords from each Node and builds a mapping from each keyword to the corresponding Nodes. This can be a powerful tool for text-based queries, allowing for quick and precise retrieval of relevant information.

Each of these indexes offers unique advantages, and the choice between them depends on the nature of your data and the specific requirements of your use case.
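As a rough illustration, the alternative index types can be built from the same documents in much the same way as the vector index. The sketch below assumes the same llama_index version and import style used in this article; the exact class names (GPTListIndex, GPTTreeIndex, GPTKeywordTableIndex) vary between library versions, so check the documentation linked earlier before relying on them.

```python
from llama_index import (
    SimpleDirectoryReader,
    GPTListIndex,
    GPTTreeIndex,
    GPTKeywordTableIndex,
)

documents = SimpleDirectoryReader('stories').load_data()

# Sequential chain of Nodes - useful when insertion order matters
list_index = GPTListIndex.from_documents(documents)

# Hierarchical tree of Nodes - useful for complex, nested data
tree_index = GPTTreeIndex.from_documents(documents)

# Keyword-to-Node mapping - useful for precise, keyword-driven retrieval
keyword_index = GPTKeywordTableIndex.from_documents(documents)

# Each index exposes the same query interface as the vector store index
response = tree_index.as_query_engine().query("Who conquered Rome?")
print(response)
```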
Conclusion

Now, think about the possibilities. Instead of fictional stories, we could index a collection of our standard operating procedures, process descriptions, knowledge articles, disaster recovery plans, change schedules, and so on. Or, as another example, we could build a chatbot that solves generic user requests using GPT 3.5's knowledge but forwards more specific issues (indexed from our knowledge base) to a support team. This brings unlimited potential for automating our business processes and improving decision-making. You get the best of both worlds: the power of Large Language Models combined with the value of your own knowledge base.
Security considerations

Working on this article made me realize that we cannot really trust our interactions with an AI model unless we are in full control of the entire technology stack. Just because the interface might look familiar, it doesn't necessarily mean that bad actors cannot compromise the integrity of the data by injecting false or censored responses to our queries. But that's a story for another time!

Final note

This article barely scratches the surface of the full capabilities of LlamaIndex. It is not meant to be a comprehensive guide to this topic, but rather to serve as an example starting point for integrating AI technologies into our day-to-day business processes. I encourage you to study LlamaIndex's documentation in depth (https://gpt-index.readthedocs.io/en/latest/) if you want to take advantage of its full capabilities.

About the Author

Andrei Gheorghiu is an experienced trainer with a passion for helping learners achieve their maximum potential. He always strives to bring a high level of expertise and empathy to his teaching. With a background in IT audit, information security, and IT service management, Andrei has delivered training to over 10,000 students across different industries and countries. He is also a Certified Information Systems Security Professional and Certified Information Systems Auditor, with a keen interest in digital domains like Security Management and Artificial Intelligence. In his free time, Andrei enjoys trail running, photography, video editing, and exploring the latest developments in technology.

You can connect with Andrei on:
LinkedIn: https://www.linkedin.com/in/gheorghiu/
Twitter: https://twitter.com/aqg8017


Getting Started with Microsoft Fabric

Arshad Ali, Bradley Schacht
11 Sep 2023
7 min read
This article is an excerpt from the book Learn Microsoft Fabric by Arshad Ali and Bradley Schacht, a step-by-step guide to harnessing the power of Microsoft Fabric in developing data analytics solutions for various use cases.

Introduction

In this article, you will learn how to enable Microsoft Fabric in an existing Power BI tenant, or create a new Fabric tenant if you don't have one already. Next, you will create your first Fabric workspace, which you will use to carry out all subsequent exercises.

Enabling Microsoft Fabric

Microsoft Fabric shares the same tenant as Power BI. If you have a Power BI or Microsoft Fabric tenant already created, you have the two options below to enable Fabric in that tenant (more at https://learn.microsoft.com/en-us/fabric/admin/fabric-switch). For each of these options, depending on the configuration you select, Microsoft Fabric becomes available either for everyone in the tenant or for a selected group of users.

Note: If you are new to Power BI or your organization doesn't have a Power BI/Fabric tenant yet, you can set one up and use a Fabric trial by visiting https://aka.ms/try-fabric to sign up for a Power BI free license. Afterward, you can start the Fabric trial, as mentioned later in this section while discussing trial capacity. The Fabric trial includes access to the Fabric product experiences and the resources to create and host Fabric items. As of this writing, the Fabric trial license allows you to work with Fabric for 60 days free. At that point, you will need to provision Fabric capacity to continue using Microsoft Fabric.

Enable Fabric at the tenant level: If you have admin privileges, you can access the Admin center from the Settings menu in the upper right corner of the Power BI service. From here, you enable Fabric on the Tenant settings page. When you enable Microsoft Fabric using the tenant setting, users can create Fabric items in that tenant. To do so, navigate to the Tenant settings page in the Admin portal, expand the "Users can create Fabric items" setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.1 – Microsoft Fabric - tenant settings

Enable Fabric at the capacity level: While it is recommended to enable Microsoft Fabric for the entire organization at the tenant level, there are times when you would like it enabled only for a certain group of people, at the capacity level. For that, in the tenant Admin portal, navigate to the Capacity settings page, identify and select the capacity for which you want Microsoft Fabric to be enabled, and then click on the Delegate tenant settings tab at the top. Then, under the Microsoft Fabric section of the page, expand the "Users can create Fabric items" setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.2 – Microsoft Fabric - capacity settings

Both scenarios above assume you already have paid capacity available. If you don't have it yet, you can use the Fabric trial (more at https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial) to create Fabric items for a certain duration, if you want to learn or test the functionality of Microsoft Fabric. For that, open the Fabric homepage (https://app.fabric.microsoft.com/home) and select Account Manager. In the Account Manager, click Start Trial and follow the wizard instructions to enable the Fabric trial with trial capacity.
Note: To help you learn and try out different capabilities in Fabric, Microsoft provides free trial capacity. With this trial capacity, you get full access to all the Fabric workloads and features, including the ability to create Fabric items and collaborate with others, as well as OneLake storage of up to 1 TB. However, the trial capacity is intended for trial and testing only, not for production usage.

Checking your access to Microsoft Fabric

To validate that Fabric is enabled and you have access to it in your organization's tenant, sign in to Power BI and look for the Power BI icon at the bottom left of the screen. If you see the Power BI icon, select it to see the experiences available within Fabric.

Figure 2.3 – Microsoft Fabric - workload switcher

If the icon is present, you can click the Microsoft Fabric link at the top of the screen (as shown in Figure 2.3) to switch to the Fabric experience, or click on the individual experience you want to switch to.

Figure 2.4 – Microsoft Fabric - home page

However, if the icon isn't present, Fabric is not available to you. In that case, follow the steps mentioned in the previous section (or work with your Power BI or Fabric admin) to enable it.

Creating your first Fabric-enabled workspace

Once you have confirmed that Fabric is enabled in your tenant and you have access to it, the next step is to create your Fabric workspace. You can think of a Fabric workspace as a logical container that will hold all your items, such as lakehouses, warehouses, notebooks, and pipelines. Follow these steps to create your first Fabric workspace:

1. Sign in to Power BI (https://app.powerbi.com/).
2. Select Workspaces | + New workspace.

Figure 2.5 – Create a new workspace

3. Fill out the Create a workspace form as follows:
   o Name: Enter "Learn Microsoft Fabric" plus some characters for uniqueness.
   o Description: Optionally, enter a description for the workspace.

Figure 2.6 – Create new workspace - details

   o Advanced: Select Fabric capacity under License mode and then choose a capacity you have access to. If you don't have one, you can start a trial license, as described earlier, and use it here.
4. Select Apply. The workspace will be created and opened.
5. You can click on Workspaces again and then search for your workspace by typing its name in the search box. You can also pin the selected workspace so that it always appears at the top.

Figure 2.7 – Search for a workspace

6. Clicking on the name of the workspace opens the workspace, and its link becomes available in the left-side navigation bar, allowing you to switch from one item to another quickly. Since we haven't created anything yet, there is nothing here. You can click +New to start creating Fabric items.

Figure 2.8 – Switch to a workspace

With a Microsoft Fabric workspace set up, let's review the different workloads available.

Conclusion

In this article, we covered the basics of Microsoft Fabric in Power BI. You can enable Fabric at the tenant or capacity level, with a trial option available for newcomers. To check your access, look for the Power BI icon: if it's present, you're ready to use Fabric; if not, follow the setup steps. Create a Fabric workspace to manage items like lakehouses and pipelines. This article offers a quick guide to kickstart your journey with Microsoft Fabric in Power BI.

Author Bio

Arshad Ali is a Principal Program Manager on the Microsoft Fabric product team, based in Redmond, WA.
As part of his role at Microsoft, he works with strategic customers, partners, and ISVs to help them adopt Microsoft Fabric in solving their complex data analytics problems and driving business insights, and he helps shape the future of Microsoft Fabric.

Bradley Schacht is a Principal Program Manager on the Microsoft Fabric product team, based in Jacksonville, FL. As part of his role at Microsoft, Bradley works directly with customers to solve some of their most complex data warehousing problems and helps shape the future of the Microsoft Fabric cloud service.


Animating Adobe Firefly Content with Adobe Animate

Joseph Labrecque
14 Jun 2023
10 min read
You can download the image source files for this article from here.

An Introduction to Adobe Firefly

Earlier this year, Adobe unveiled its set of generative AI tools via https://firefly.adobe.com/. These services are similar to other generative image AIs such as Midjourney or DALL-E 2 and are exposed through a web-based interface with a well-thought-out UI. Eventually, Adobe plans to integrate these procedures into existing software such as Photoshop, Premiere Pro, Express, and more. Originally gated behind a waitlist one had to sign up for, Firefly is now open to anyone with an Adobe ID.

Image 1: Adobe Firefly

"Firefly is the new family of creative generative AI models coming to Adobe products, focusing initially on image and text effect generation. Firefly will offer new ways to ideate, create, and communicate while significantly improving creative workflows. Firefly is the natural extension of the technology Adobe has produced over the past 40 years, driven by the belief that people should be empowered to bring their ideas into the world precisely as they imagine them." -- Adobe

A few things make Adobe Firefly different from similar generative AI services: its immediate approach to moving beyond prompt-based image generation to procedures such as prompt-driven text effects and vector recolorization; the fact that the Firefly model is trained on Adobe Stock content instead of general web content; and Adobe's commitment to the Content Authenticity Initiative.

As of early June 2023, the following procedures are available through Adobe Firefly:

● Text to Image: Generate images from a detailed text description.
● Generative Fill: Use a brush to remove objects, or paint in new ones from text descriptions.
● Text Effects: Apply styles or textures to text with a text prompt.
● Generative Recolor: Generate color variations of your vector artwork from a detailed text description.

In addition to this, the public Adobe Photoshop beta also has deep integration with Firefly through generative fill and other procedures exposed through time-tested Photoshop workflows.

Generating Images with Text Prompts

Let's have a look at how prompt-based image generation in Adobe Firefly works. From the Firefly start page, choose the option to generate content using Text to Image and type in the following prompt:

cute black kitten mask with glowing green eyes on a neutral pastel background

On the image generation screen, select None as the content type. You should see results similar to the screenshot pictured below.

Image 2: Cute black kitten with green eyes

Firefly generates a set of four variations based on the given text prompt. You can direct the AI to adjust certain aspects of the generated images by tweaking the prompt, specifying certain styles via the user interface, and making choices around aspect ratio and other attributes.

Using Generative Fill

We can enter the generative fill interface directly after creating our image assets by hovering over the chosen variant and clicking the Generative Fill option.

Image 3: Generative fill output

The generative fill UI contains tools for inserting and removing content from an existing image under the direction of text prompts. In the example below, we are brushing over the kitten's forehead and specifying, via a text prompt, that we would like the AI to generate a crown in that place:

Image 4: Generating a crown using a text prompt

Clicking the Generate button will create a set of variants, similar to when we first created the image.
At this point, we can choose the variant we like best and click the Download button to save the image to our local machine for further manipulation.

Image 5: Image generated with a crown added

See the file Firefly Inpaint 20230531190125.png to view the resultant image generated by Firefly with the crown added. You can download the files from here.

Preparing for Animation with Photoshop

Before bringing the generated kitten image into Animate, we'll want to optionally remove the background using Photoshop. With the new contextual toolbar, Adobe makes this very simple. Open the generated image file in Photoshop and, with the single available layer selected, choose Remove Background from the contextual toolbar that appears beneath the image.

Image 6: Removing background in Photoshop

The background is removed via a layer mask that you can then use to refine the portions of your image that have been hidden, through the normal procedure of brushing black or white across the mask to hide or reveal content. See the file Kitten.png to view the image with the background removed by Photoshop.

To export your image without the background, choose File… Export… Export As… from the application menu, and select to export a transparent .png file to a location on your local machine that you'll remember.

Animate Document Setup

We've prepared a new Animate document using the resulting .png file as a base. The stage measures 1024x1024 and has been given an appropriate background color. The FPS (frames per second) value is set to 30, so every 30 frames in the timeline define 1 second of motion. The kitten image has been added to the single layer in this document and is present on the stage. See the file KittenStarter.fla from here if you'd like to follow along in the next section.

Producing Motion from Generative Images with Adobe Animate

While Adobe Animate does not have any direct interaction with Firefly procedures yet, it has exhibited Adobe Sensei capabilities for a number of years through automated lip-sync workflows, which use the Sensei AI to match visemes with language sounds (phonemes) across the timeline for automated frame assignments. Since we are dealing with a kitten image and not a speaking character, we will not be using this technology in this article.

1. Instead, we'll use another new animation workflow available in Animate through the Asset Warp tool and the creation of Warped Objects. Select the Asset Warp tool from the toolbar. This tool allows us to transform images into Warped Objects and to place and manipulate pins to create a warp mesh.

2. In the Properties panel, ensure in the Tool Properties tab that Create Bones is toggled off within the Warp Options section. This ensures we create pins, not the bones that would be needed for a full rig.

Image 7: Properties panel

3. Using the Asset Warp tool, click the tip of the kitten's right ear to establish a pin and transform the image into a Warped Object. An overlay mesh is displayed across the object, indicating the form and complexity of the newly created Warped Object.

Image 8: Warped Object

4. Add additional pins across the image to establish a set of control pins. I recommend moving clockwise around the kitten's face; I placed a total of 8 pins in locations where I may want to control the mesh.

Image 9: Kitten face, pinned clockwise

5. Now, look at the timeline and note that we currently have a single keyframe at frame 1.
It is good practice to establish your Warped Object and its associated pins first, before creating additional keyframes, but changes can be propagated across frames if necessary. Select frame 30 and insert a new keyframe from the menu above the timeline to duplicate the keyframe at frame 1. You'll then have 2 identical keyframes in the timeline with a span of frames between them.

6. Add a third keyframe at frame 15. This is the frame in which we will manipulate the Warped Object. The keyframes at frames 1 and 30 will remain as they are.

7. Using the Asset Warp tool once again, click and drag the various pins at frame 15 to distort the mesh and change the underlying image in subtle ways. I pulled the ears closer together, moved the chin down, and adjusted the crown and cheeks very slightly.

8. We'll now establish animation between these three keyframes with a Classic Tween. In the timeline, select a series of frames across both frame spans so they are highlighted in blue. From the menu above the timeline, choose Create classic tween.

9. The frame spans take on a violet color, which indicates a set of Classic Tweens has been established. The arrows across each tween indicate there are no technical errors. Scrub the blue playhead across the timeline to view the animation you've created from our Firefly content. Your Adobe Firefly content is now fully animated!

Refining Warped Object Motion

If desired, you can refine the motion by applying easing effects, such as a Cubic Ease In Out, across your tweens: select the frames of a tween and look at the Frame tab in the Properties panel. In the Tweening section, click the Effect button to view the easing presets and double-click the one you wish to apply. See the file KittenComplete.fla to view the finished animation.

Sharing Your Work

It's easy to share your animated Firefly creation through the menu at the upper right of the screen in Animate. Click the Quick Share and publish icon to begin. Choose the Publish option, then Animated GIF (.gif), and click the Publish button to generate the file. You will be able to locate the published .gif file in the same folder as your .fla document. See the file Kitten.gif to view the generated animated GIF.

Author Bio

Joseph Labrecque is a Teaching Assistant Professor and Instructor of Technology at the University of Colorado Boulder, an Adobe Education Leader, and an Adobe Partner by Design.

Joseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers including LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities.
He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works. Joseph is an Adobe Education Leader and a member of Adobe Partners by Design. He holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023


Duet AI for Google Workspace

Aryan Irani
22 Sep 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Duet AI was announced at Google Cloud Next '23 as a powerful AI collaborator that can help you get more done in Google Workspace. It can help you write better emails, sort tables, create presentations, and more. Duet AI is still under development, but it has already learned to perform many kinds of tasks, including:

● Helping you write better in Google Docs
● Generating images for better presentations in Google Slides
● Organizing and analyzing data in Google Sheets

Duet AI provides much more, and Google will be announcing further updates. In this blog post, we will take a look at these features in detail, with some interesting examples.

Help Me Write in Google Docs

The Help Me Write feature in Google Docs helps you write better content, faster. It can help you generate new text, rewrite existing content, or even improve your writing style.

● Generate new text: You can use the Help Me Write feature to generate new text for your document, such as a blog post, a social media campaign, and more. All you have to do is type in a prompt, and it will generate text according to your instructions.
● Rewrite existing text: You can use the Help Me Write feature to rewrite existing text in the document. For example, you can use it to make your writing more concise, more formal, or more creative.
● Improve your writing style: The feature suggests edits and improvements, such as correcting your grammar, improving your sentence structure, and making your writing more engaging.

Now that we understand the capabilities of the Help Me Write feature in Google Docs, let's take a look at it in action.

On opening a new Google Doc, you can see the Help Me Write feature pop up. Clicking the button lets you enter a prompt. For this example, we are going to ask it to write an advertisement for men's soap bars. After structuring the prompt, click Create to generate the text. In just a few seconds, you will see that Duet AI has generated a complete new advertisement.

Here you can see we have successfully generated an advertisement for the soap bars. On reviewing the advertisement, suppose you don't like it and want to refine it, perhaps changing its tone. You can do that by clicking Refine. On clicking Refine, you can choose from a variety of options for how you want to refine the paragraph Duet AI just generated. Additionally, you can write your own refinement prompt by typing it in the custom section. For this example, we are going to change the tone of the advertisement to Casual.

On refining the paragraph, within a few seconds we can see that it has produced a new, informal version. Once you like the paragraph Duet AI has generated, click Insert, and the paragraph will be placed inside your Google Doc. We have now successfully generated a new advertisement using Duet AI.

Generate Images in Slides

There have been so many times when I have spent time trying to find the right photo to fit my slide and been unsuccessful.
With the new feature that Duet AI provides for Google Slides, I can generate images inside Slides and integrate them at the click of a button. Let's take a look at it in action.

When you open Google Slides, you will see an option called Help Me Visualize. Once you click it, a new sidebar opens on the right side of the screen. In this sidebar, you enter the prompt for the image you want to generate. Once you enter the prompt, you have the option to select a style for the image. After selecting the style, click Create. In about 15-20 seconds, you will see multiple photos generated according to the prompt you entered. Here you can see that, on successful execution, we have generated images inside Google Slides.

Organizing and analyzing data in Google Sheets

We looked at generating images in Google Slides, and before that at the Help Me Write feature in Google Docs. All these features helped us understand the power of Duet AI inside Google Workspace tools. The next feature we will look at is in Google Sheets, and it allows us to turn ideas into actions and data into insights.

Once you open a Google Sheet, you will see a sidebar on the right side of the screen saying Help Me Organize. With your Google Sheet and the sidebar ready, it's time to enter a prompt describing the custom template you want to create. For this example, I asked it to generate a template based on my prompt. On clicking Create, within a few seconds you will see that it has generated some data inside your Google Sheet. On successful execution, it has generated data according to the prompt we designed. If you are comfortable with the template it has generated, click Insert. The data will be inserted into the Google Sheet, and you can start using it like a normal Google Sheet.

Conclusion

Currently, these features are not available to everybody; they are on a waitlist. If you want to harness the power of AI inside Google Workspace tools like Google Sheets, Google Docs, Google Slides, and more, apply for the waitlist by clicking here.

In this blog, we looked at how we can use AI inside our Google Docs to help us write better. Later, we looked at generating images inside Google Slides to make our presentations more engaging, and finally, we looked at generating templates inside Google Sheets. I hope you now understand how to get the basics done with Duet AI for Google Workspace. Feel free to reach out with any issues or feedback at aryanirani123@gmail.com.

Author Bio

Aryan Irani is a Google Developer Expert for Google Workspace. He is a writer and content creator who has been working in the Google Workspace domain for three years. He has extensive experience in the area, having published 100 technical articles on Google Apps Script, Google Workspace tools, and Google APIs.

Website


AI-Powered Stock Selection

Julian Melanson
12 Jul 2023
4 min read
Artificial intelligence continues to infiltrate every facet of modern life, from daily chores to complex decision-making, including the stock market. The recent advent of AI-powered language models like ChatGPT by OpenAI serves as a notable testament to this. The potential of these models transcends conversational prowess, extending to the ability to guide investment decisions.

A case in point is an experiment conducted by Finder.com, an international financial comparison site. The test pitted an AI-constructed portfolio against some of the most renowned investment funds in the United Kingdom, and the AI-curated selection outstripped its counterparts. The portfolio, an assortment of 38 stocks picked by ChatGPT, posted a gain of 4.9% between March 6 and April 28. In comparison, ten top-tier investment funds noted an average decline of 0.8% over the same period. To put this into perspective, the S&P 500 index, an esteemed gauge of the American market, rose 3%, and the Stoxx Europe 600, its European equivalent, noted a modest increase of 0.5%.

The experiment's dynamics are as intriguing as its outcome. Investment funds aggregate capital from a multitude of investors, with a fund manager administering the investment decisions. Finder's analysts, however, asked the AI chatbot to construct a stock portfolio based on prevalent selection criteria: low indebtedness and a solid growth trajectory. Noteworthy picks included industry behemoths like Microsoft, Netflix, and Walmart.

The process's ingenuity lies in its accessibility. While AI has pervaded major funds for years, supplementing investment decisions, the advent of ChatGPT has democratized this expertise. Now the public can use this technology, potentially revolutionizing retail investment.

How dependable are these AI-driven stock predictions? A study by the University of Florida supplies an answer. Published in April, the study posits that ChatGPT could forecast specific companies' stock price movements more accurately than some fundamental analysis models.

In fact, the democratization of AI, characterized by models like ChatGPT and BERT, could potentially upend the financial industry. Researchers across the globe have corroborated this sentiment: in two separate studies, researchers found that large language models (LLMs) can enhance stock market and public opinion predictions, as evidenced by historical data.

University of Florida professors Alejandro Lopez-Lira and Yuehua Tang further validated this argument in their paper "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models". They utilized ChatGPT to assess the sentiment of news headlines, a metric that has become indispensable to the quantitative analysis algorithms employed by stock traders.

Sentiment analysis discerns whether a text, such as a news headline, conveys a positive, neutral, or negative sentiment about a subject or company. This evaluation enhances the accuracy of market predictions.
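To make the procedure concrete, here is a hedged sketch of how such headline scoring could be wired up with the openai Python package (using the legacy ChatCompletion interface current when this article was written). The prompt wording, model choice, and one-word label scheme are illustrative assumptions on our part, not the exact setup used in the paper.

```python
import openai

headlines = [
    "Acme Corp beats quarterly earnings expectations",
    "Regulators open probe into Acme Corp accounting practices",
]

def headline_sentiment(headline, company="Acme Corp"):
    """Ask the model whether a headline is good, bad, or neutral news for a company."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a financial analyst. "
             "Reply with exactly one word: POSITIVE, NEGATIVE, or NEUTRAL."},
            {"role": "user", "content": f"Is this headline good or bad news "
             f"for the stock price of {company}? Headline: {headline}"},
        ],
    )
    return response['choices'][0]['message']['content'].strip()

for h in headlines:
    print(h, "->", headline_sentiment(h))
```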
Lopez-Lira and Tang applied ChatGPT to gauge the sentiment expressed in news headlines. Upon comparing ChatGPT's assessment of these news stories with the subsequent performance of the companies' shares in their sample, they discovered statistically significant predictive power, a feat unachieved by other LLMs. The professors asserted, "Our analysis reveals that ChatGPT sentiment scores exhibit a statistically significant predictive power on daily stock market returns." This statement, substantiated by their findings, shows a strong correlation between the ChatGPT evaluation and the subsequent daily returns of the stocks in their sample. It underscores the potential of ChatGPT as a potent tool for predicting stock market movements based on sentiment analysis.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Databricks Dolly for Future AI Adoption

Sagar Lad
29 Dec 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Artificial intelligence is playing an increasingly crucial role in helping businesses and organizations process the huge volumes of data the world is producing. The development of huge language models to evaluate enormous amounts of text data is one of the biggest challenges in AI research. Databricks Dolly revolutionized the Databricks project, opening the door to more complex NLP models and improving the field of AI technology.

Databricks Dolly for AI

Before we dive deep into Databricks Dolly and its impact on future AI adoption, let's understand the basics of Large Language Models and their current challenges.

Large Language Models & Databricks Dolly

A large language model is an artificial intelligence system used to produce human-like language and to handle natural language processing tasks. These models are created using deep learning methods and are trained on large amounts of text using a neural network architecture. Their major objective is to produce meaningful and coherent text from a given prompt or input. There are many uses for this, including speech recognition, chatbots, language translation, and more. They have gained significant popularity because of the below capabilities:

● Text generation
● Language translation
● Classification and categorization
● Conversational AI

Recently, ChatGPT from OpenAI, Google Bard, and Bing have created unified models for training and fine-tuning such models at a large scale. The issue with these LLMs is that they save user data on external servers, opening the cloud to unauthorized users and increasing the risk of sensitive data being compromised. Additionally, they may provide irrelevant information that could potentially harm users and lead to poor judgments, as well as offensive, discriminatory content directed at certain individuals.

To overcome this challenge, there is a need for open-source alternatives that promote the accuracy and security of Large Language Models. After carefully examining user issues, the Databricks team built Databricks Dolly, an open-source chatbot that adheres to these criteria and performs exceptionally well in a variety of use cases.

Databricks Dolly can produce text by responding to questions, summarizing ideas, and following other natural language commands. It is built on an open-source, 6-billion-parameter model from EleutherAI that has been modified using the databricks-dolly-15k dataset of user-generated instructions. Due to Dolly's open-source nature and commercial licensing, anyone can use it to build interactive applications without having to pay for API access or divulge their data to outside parties. Dolly may be trained for less than $30, making construction costs low. When Dolly generates an answer, data can be saved in the DBFS root or another cloud object storage location that we specify. Using Dolly, we can design, construct, and personalize an LLM without sharing any data.

Image 1 - Databricks Dolly differentiators

Democratizing the magic of Databricks Dolly

With Databricks Dolly, we can manage the below types of engagements:

1. Open- and closed-ended questions and answers
2. Information parsing from the web
3. Detailed answers based on the input
4. Creative writing

Now, let's see in detail how we can use Databricks Dolly.

Step 1: Install Required Libraries

Use the below command in a Databricks notebook, or via cmd, to install the required packages:

```
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

Image 2 - Databricks Dolly package installation

Once we execute this command in Databricks, the required packages are installed:

● Accelerate: accelerates the training of machine learning models
● Transformers: a collection of pre-trained models for NLP tasks
● Torch: used to build and train deep learning models

Step 2: Input to Databricks Dolly

Once the model is loaded, the next step is to generate text with the generate_text function.

Image 3 - Databricks Dolly - create a pipeline for remote code execution

Here, the pipeline function from the Transformers library is used to execute NLP tasks such as text generation, sentiment analysis, etc. The trust_remote_code option is used to allow remote code execution.
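The notebook code for this step appears only as screenshots in the original article, so here is a hedged reconstruction of what creating and calling the pipeline typically looks like, based on the publicly documented Hugging Face usage for the Dolly models. The dolly-v2-3b checkpoint is our illustrative choice; larger variants such as dolly-v2-12b follow the same pattern.

```python
import torch
from transformers import pipeline

# Build a text-generation pipeline around a Dolly checkpoint.
# trust_remote_code=True allows transformers to run the custom
# instruction-following pipeline code that ships with the model.
generate_text = pipeline(
    model="databricks/dolly-v2-3b",  # illustrative choice of checkpoint
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Provide the textual input and parse the generated response
result = generate_text("Explain what a large language model is, in two sentences.")
print(result[0]["generated_text"])
```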
Step 3: Pipeline reference to parse the output

Image 4 - Databricks Dolly - pipeline reference to parse the output

The final step is to provide the textual input to the model via the generate_text function (as in the sketch above), which will use the language model to generate the response.

Best Practices for Using Databricks Dolly

● Be specific and lucid in your instructions to Dolly.
● Use Databricks Machine Learning models to train and deploy Dolly for scalable, faster execution.
● Use the Hugging Face library and repo, which has multiple tutorials and examples.

Conclusion

This article describes the difficulties that organizations face in adopting Large Language Models and how Databricks can overcome these difficulties by utilizing Dolly. Dolly gives businesses the ability to create a customized LLM that meets their unique requirements, with the added benefit of open-source code. The article also highlights the significance of recommended practices for maximizing LLM performance.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building enterprise-grade intelligent Azure data and analytics solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.

Link - Medium, Amazon, LinkedIn


Build Enterprise AI Workflows with AirOps

Julian Melanson
12 Jul 2023
4 min read
In the realm of artificial intelligence, immense potential is no longer an abstract concept but a palpable reality, and businesses are increasingly seizing the opportunities this technology affords. AirOps, a new player in the AI sphere, has emerged as a remarkable conduit for businesses to harness the transformative abilities of AI within their operations. The company has announced a $7 million seed funding round, showing the confidence investors place in its unique proposition.

Founded by Alex Halliday, Berna Gonzalez, and Matt Hammel, the company encapsulates a blend of technological knowledge and industry expertise. Their collective backgrounds span a diverse range of companies, including MasterClass, Bungalow, and more. This multifaceted perspective fuels the vision of AirOps, allowing it to offer dynamic and adaptable solutions tailored to a multitude of business needs.

AirOps deploys a platform leveraging large language models (LLMs) such as GPT-3, GPT-4, and Claude, each with its unique capabilities and merits. The AI-driven tools developed by AirOps can be integrated within existing business systems, speeding up processes, revealing deep insights from data, and generating custom content. These services are available across various interfaces, including Google Sheets, web apps, data warehouses, and APIs, allowing businesses to embed AI capabilities directly into their established workflows.

AirOps' Main Features

Despite the impressive abilities of LLMs like GPT-4, the challenge for businesses lies in their practical deployment. AirOps mitigates this hurdle, offering a robust platform that enables businesses to use these AI models to address their specific needs. The platform helps users automate laborious tasks, generate personalized content, extract valuable insights from data, and leverage natural language processing techniques.

One of the salient features of AirOps' value proposition is cost efficiency. Utilizing AI models can be a costly endeavor, but the AirOps platform presents an innovative solution: the system employs larger models such as GPT-4 for initial training, then switches to smaller, fine-tuned, open-source models for regular operations, significantly reducing the financial burden.

As AI evolves, the demand for nuanced and adaptable models increases. AirOps is at the forefront of these developments, continually learning and adapting to offer the most suitable solutions for its customers. AirOps aids businesses in creating AI experiences and generating new content from their existing data corpus, paving the way for a streamlined and efficient approach to making the most of AI capabilities.

The company's strategic vision is also worth noting. Initially, AirOps set out to help businesses extract value from their data. However, as large language models have gained public recognition, the company has astutely shifted its focus. Today, AirOps aims to help businesses merge their data with LLMs, leading to the creation of custom workflows and applications.

As AI continues to permeate the professional sphere, AirOps is showing how businesses can capitalize on this trend. Its AI-powered tools are being used across a variety of sectors, such as real estate, e-learning, and financial services. By automating complex tasks, streamlining workflows, and generating custom content at scale, AirOps is empowering businesses to harness the transformative capabilities of AI effectively and efficiently.

With its recent seed funding, the company aims to expand its product suite, bolster its team, and extend its customer base. As Halliday, the CEO, stated, the company's goal is to enable businesses to bridge the gap between the theoretical prowess of AI and its practical implementation. Through its groundbreaking work, AirOps is ensuring that the AI revolution in the business world is not merely a utopian vision but an attainable reality.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning, a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities. No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


Getting Started with AutoML

M.T. White
22 Aug 2023
7 min read
Introduction

Tools like ChatGPT have been making headlines as of late. ChatGPT and other LLMs have been transforming the way people study, work, and, for the most part, do anything. However, ChatGPT and other LLMs are aimed at everyday users. In short, they can help engineers and data scientists, but they are not designed to be engineering or analytics tools. Though ChatGPT and similar LLMs are not designed as machine learning tools, there is a tool that can assist engineers and data scientists: AutoML for Azure. This article explores AutoML and how engineers and data scientists can use it to create machine learning models.

What is AutoML?

AutoML is an Azure tool that builds the optimal model for a given dataset. In many senses, AutoML can be thought of as a ChatGPT-like system for engineers: it allows them to quickly produce optimal machine learning models with little to no technical input. In short, ChatGPT and similar systems are tools that can answer general questions about anything, while AutoML is specifically designed to produce machine learning models.

How AutoML Works

Though AutoML is a tool designed to produce machine learning models, it doesn't actually use AI or machine learning in the process. The key to AutoML is parallel pipelines. A pipeline can be thought of as the logic in a machine learning model; for example, the pipeline logic includes steps such as cleaning data, splitting data, applying a model, and so on.

When you use AutoML, it creates a series of parallel pipelines with different algorithms and parameters. When a model "fits" the data best, the process stops and that pipeline is chosen. Essentially, AutoML in Azure is a quick and easy way for engineers to cut out the skilled, time-consuming development work that can easily hinder less experienced data scientists or engineers. To demonstrate how AutoML in Azure works, let's build a model using the tool.

What Do You Need to Know?

Azure's AutoML takes a little technical knowledge to get up and running, especially if you're using a custom dataset. For the most part, you'll need to know roughly what type of analysis you're going to perform, and you'll need to know how to create a dataset. This may seem like a daunting task, but it is relatively easy.

Setup

To use AutoML in Azure you'll need to set up a few things. The first thing to set up is an ML workspace. This is done by simply logging into Azure and searching for ML, as in Figure 1:

Figure 1

From there, click on Azure Machine Learning and you should be redirected to the corresponding page. Once on the Azure Machine Learning page, click on the Create button and then New Workspace:

Figure 2

Once there, fill out the form; all you need to do is select a resource group and give the workspace a name. You can use any name you want, but this tutorial uses the name Article 1. You'll be prompted to click Create; once you click that button, Azure will start to deploy the workspace. The deployment may take a few minutes to complete. Once done, click Go to resource, then click on Launch studio as in Figure 3.

Figure 3

At this point, the workspace has been generated and we can move to the next step in the process: using AutoML to create a new model.

Now that the workspace has been created and you have clicked Launch Studio, you should be met with Figure 4. The page in Figure 4 is Azure Machine Learning Studio. From here you can navigate to AutoML by clicking the link on the left sidebar:

Figure 4

Once you click AutoML you should be redirected to the page in Figure 5:

Figure 5

Once you see something akin to Figure 5, click on the New Automated ML Job button, which should redirect you to a screen that prompts you to select a dataset. This step is one of the more in-depth parts of the process. You can opt to use a predefined dataset that Azure provides for test purposes. However, for a real-world application you'll probably want a custom dataset engineered for your task. Azure allows either. This tutorial uses a custom dataset with two columns, Hours and Story Points:

HoursStory Points161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315

To use this dataset, simply copy and paste it into a CSV file, then select the "from a file" option and follow the wizard. Note that for custom datasets you'll need at least 50 data points.
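If you would rather generate the CSV programmatically, the following is a minimal sketch; the (Hours, Story Points) pairs below are hypothetical placeholders, so substitute your own data:

import pandas as pd

# Hypothetical (Hours, Story Points) pairs; replace with real data.
rows = [(16, 13), (15, 12), (15, 11), (13, 8), (22, 18), (28, 18),
        (30, 19), (10, 3), (21, 14), (11, 7), (12, 9), (25, 19), (24, 17)]

# Repeat the sample to clear Azure AutoML's 50-data-point minimum.
df = pd.DataFrame(rows * 4, columns=["Hours", "Story Points"])
df.to_csv("story_points.csv", index=False)
print(len(df), "rows written")  # 52 rows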
Continue to follow the wizard and give the experiment a name, for example, E1. You will also have to select a target column; for this tutorial, select Story Points. If you do not already have a compute instance available, click the New button at the bottom and follow the wizard to set one up. Once that step is complete you should be directed to a page like the one in Figure 6:

Figure 6

This is where you select the general type of analysis to be performed on the dataset. For this tutorial select Regression, click the Next button shown in Figure 6, then click Finish. This will start the process, which can take up to about 20 minutes depending on which compute instance you use. Once done, you will be able to see the metrics by clicking on the Models tab. This will show all the models that were tried; from here you can explore each model and its associated statistics.

Summary

In all, Azure's AutoML is an AI tool that helps engineers quickly produce an optimal model. Though not the same, this tool can serve engineers the way ChatGPT and similar systems serve everyday users. The main drawback to AutoML is that, unlike ChatGPT, a user needs a rough idea of what they're doing. However, once a person has a rough idea of the basic types of machine learning analysis, they should be able to use this tool to great effect.

Author Bio

M.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T. is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry, where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.

Author of the book: Mastering PLC Programming


Pinecone 101: Anything You Need to Know

Louis Owen
16 Oct 2023
10 min read
Introduction

The ability to harness vast amounts of data is essential to building an AI-based application. Whether you're building a search engine, creating tabular question-answering systems, or developing a frequently asked questions (FAQ) search, you need efficient tools to manage and retrieve information. Vector databases have emerged as an invaluable resource for these tasks. In this article, we'll delve into the world of vector databases, focusing on Pinecone, a cloud-based option with high performance. We'll also discuss other noteworthy choices like Milvus and Qdrant. So, buckle up as we explore the world of vector databases and their applications.

Vector databases serve two primary purposes: facilitating semantic search and acting as long-term memory for Large Language Models (LLMs). Semantic search is widely employed across various applications, including search engines, tabular question answering, and FAQ-based question answering. This search method relies on embedding-based similarity search: the core idea is to find the closest embedding that represents a query from a source document. Essentially, it's about matching the question to the answer by comparing their embeddings.

Vector databases are pivotal in the context of LLMs, particularly in architectures like Retrieval-Augmented Generation (RAG). In RAG, a knowledge base is essential, and vector databases step in to store all the sources of truth. They convert this information into embeddings and perform similarity searches to retrieve the most relevant documents. In a nutshell, the vector database becomes the knowledge base for your LLM, acting as an external long-term memory.

Now that we've established the importance of vector databases, let's explore your options. The market offers a variety of vector databases to suit different use cases. Among the prominent contenders, three stand out: Milvus, Pinecone, and Qdrant.

Milvus and Pinecone are renowned for their exceptional performance. Milvus, based on the Faiss library, offers a highly optimized vector similarity search; it's a powerhouse for demanding applications. Pinecone, on the other hand, is a cloud-based vector database designed for real-time similarity searches. Both of these options excel in speed and reliability, making them ideal for intensive use cases.

If you're on the lookout for a free and open-source vector storage database, Qdrant is a compelling choice. However, it's not as fast or scalable as Milvus and Pinecone. Qdrant is a valuable option when you need an economical solution without sacrificing core functionality. You can check another article about Qdrant here.

Scalability is a crucial factor when considering vector databases, especially for large-scale applications. Milvus stands out for its ability to scale horizontally to handle billions of vectors and thousands of queries per second. Pinecone, being cloud-based, automatically scales as your needs grow. Qdrant, as mentioned earlier, may not be the go-to option for extreme scalability.

In this article, we'll dive deeper into Pinecone. We'll discuss everything you need to know about Pinecone, starting from Pinecone's architecture, how to set it up, the pricing, and several examples of how to use it.
Without wasting any more time, let's take a deep breath, make ourselves comfortable, and get ready to learn all you need to know about Pinecone!

Getting to Know Pinecone

Indexes are at the core of Pinecone's functionality. They serve as the repositories for your vector embeddings and metadata. Each project can have one or more indexes. The structure of an index is highly flexible and can be tailored to your specific use case and resource requirements.

Pods are the units of cloud resources that provide storage and compute for each index. They are equipped with vCPU, RAM, and disk space to handle the processing and storage needs of your data. The choice of pod type is crucial, as it directly impacts the performance and scalability of your Pinecone index.

When it comes to selecting the appropriate pod type, it's essential to align your choice with your specific use case. Pinecone offers various pod types designed to cater to different resource needs. Whether you require substantial storage capacity, high computational power, or a balance between the two, there's a pod type that suits your requirements.

As your data grows, you have the flexibility to scale your storage capacity. This can be achieved by increasing the size and number of pods allocated to your index. Pinecone ensures that you have the necessary resources to manage and access your expanding dataset efficiently.

In addition to scaling storage, Pinecone allows you to control throughput as well. You can fine-tune the performance of your index by adding replicas. These replicas help distribute the workload and handle increased query traffic, providing a seamless and responsive experience for your end users.

Setting Up Pinecone

Pinecone is not only available in Python; it also offers TypeScript/Node clients. In this article, we'll focus only on the Python client of Pinecone. Setting up Pinecone in Python is very straightforward; we just need to install it via pip, like the following:

pip3 install pinecone-client

Once it's installed, we can directly exploit the power of Pinecone. However, remember that Pinecone is a commercial product, so we need to supply our API key when using it.

import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

Get Started with Pinecone

First things first, we need to create an index in Pinecone to be able to exploit the power of the vector database. Remember that a vector database essentially stores embeddings, or vectors, inside it. Thus, configuring the dimensions of the vectors and the distance metric to be used is important when creating an index. The commands below show how to create an index named "hello_pinecone" for performing an approximate nearest-neighbor search using the cosine distance metric for 10-dimensional vectors. Creating an index usually takes around 60 seconds.

pinecone.create_index("hello_pinecone", dimension=10, metric="cosine")

Once the index is created, we can get all the information about it by calling the .describe_index() method. This includes configuration information and the deployment status of the index. The operation requires the name of the index as a parameter; the response includes details about the database and its status.

index_description = pinecone.describe_index("hello_pinecone")

You can also check which indexes exist by calling the .list_indexes() method.

active_indexes = pinecone.list_indexes()

Creating an index is just the first step before you can insert or query the data.
The next step involves creating a client instance that targets the index you just created.

index = pinecone.Index("hello_pinecone")

Once the client instance is created, we can start inserting any relevant data that we want to store. To ingest vectors into your index, use the "upsert" operation, which inserts new records into the index or updates existing records if a record with the same ID is already present. Below are the commands to upsert five 10-dimensional vectors into your index.

index.upsert([
    ("A", [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]),
    ("B", [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]),
    ("C", [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]),
    ("D", [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]),
    ("E", [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
])

We can also check the statistics of our index by running the following command.

index.describe_index_stats()
# Returns:
# {'dimension': 10, 'index_fullness': 0.0, 'namespaces': {'': {'vector_count': 5}}}

Now we can start interacting with our index. Let's perform a query operation by giving a 10-dimensional vector as the query and returning the top-3 most similar vectors from the index. Note that Pinecone judges the similarity between the query vector and each vector in the index using the similarity metric provided during index creation, which in this case is the cosine metric.

index.query(
    vector=[0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6],
    top_k=3,
    include_values=True
)
# Returns:
# {'matches': [{'id': 'E',
#               'score': 0.0,
#               'values': [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]},
#              {'id': 'D',
#               'score': 0.0799999237,
#               'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]},
#              {'id': 'C',
#               'score': 0.0800000429,
#               'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]}],
#  'namespace': ''}
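Beyond plain vector search, Pinecone also supports attaching metadata to records and filtering queries on it. As a brief, hedged sketch (the category field and its values here are hypothetical illustrations):

# Upsert records with a metadata dictionary as the third tuple element.
index.upsert([
    ("F", [0.7] * 10, {"category": "fruit"}),   # hypothetical metadata
    ("G", [0.8] * 10, {"category": "wine"}),
])

# Restrict the similarity search to records whose metadata matches.
index.query(
    vector=[0.7] * 10,
    top_k=2,
    filter={"category": {"$eq": "fruit"}},
    include_metadata=True,
)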
Finally, if you are done using the index, you can delete it by running the following command.

pinecone.delete_index("hello_pinecone")

Pinecone Pricing

Pinecone offers both paid plans and a free tier. The free tier is an excellent starting point for those who want to dip their toes into Pinecone's capabilities without immediate financial commitment. However, it comes with certain limitations: users are restricted to one index and one project.

For users opting for the paid plans, hourly billing is an important aspect to consider. Billing is determined by the per-hour price of a pod multiplied by the number of pods that your index uses. Essentially, you are charged based on the resources your index consumes, and this billing structure ensures that you only pay for what you use.

It's important to note that, regardless of the activity on your indexes, you will be sent an invoice at the end of the month. This invoice is generated based on the total minutes your indexes have been running. Pinecone's billing approach ensures transparency and aligns with your actual resource consumption.

Conclusion

Congratulations on making it to this point! Throughout this article, you have learned all you need to know about Pinecone, starting from Pinecone's architecture, how to set it up, the pricing, and several examples of how to use it. I wish you the best in your experiments creating your own vector database with Pinecone, and see you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various industries, including NGOs, e-commerce, conversational AI, OTA, smart cities, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time on his hobbies: watching movies and conducting side projects. Currently, Louis is an NLP Research Engineer at Yellow.ai, the world's leading CX automation platform. Check out Louis' website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

Generative Fill with Adobe Firefly (Part II)

Joseph Labrecque
24 Aug 2023
9 min read
Adobe Firefly Overview

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly, have a look at their FAQ.

Image 1: Adobe Firefly

For more information about using Firefly to generate images, text effects, and more, have a look at the previous articles in this series:

- Animating Adobe Firefly Content with Adobe Animate
- Exploring Text to Image with Adobe Firefly
- Generating Text Effects with Adobe Firefly
- Adobe Firefly Feature Deep Dive
- Generative Fill with Adobe Firefly (Part I)

This is the conclusion of a two-part article. You can catch up by reading Generative Fill with Adobe Firefly (Part I). In this article, we'll continue our exploration of Firefly with the Generative fill module by looking at how to use the Insert and Replace features, and more.

Generative Fill – Part I Recap

In part I of our Firefly Generative fill exploration, we uploaded a photograph of a cat, Poe, to the AI and began working with the various tools to remove the background and replace it with prompt-based generative AI content.

Image 2: The original photograph of Poe

Note that the original photograph includes a set of electric outlets exposed within the wall. When we remove the background, Firefly recognizes that these objects are distinct from the general background and so retains them.

Image 3: A set of backgrounds is generated for us to choose from

You can select any of the four variations that were generated from the set of preview thumbnails beneath the photograph. Again, if you'd like to view these processes in detail, check out Generative Fill with Adobe Firefly (Part I).

Insert and Replace with Generative Fill

We covered generating a background for our image in part I of this article. Now we will focus on other aspects of Firefly Generative fill, including the Remove and Insert tools.

Consider the image above and note that the original photograph included a set of electric outlets exposed within the wall. When we removed the background in part I, Firefly recognized that they were distinct from the general background and so retained them. The AI took them into account when generating the new background, but we should remove them. This is where the Remove tool comes into play.

Image 4: The Remove tool

Switching to the Remove tool will allow you to brush over an area of the photograph you'd like to remove. It fills in the removed area with pixels generated by the AI to create a seamless removal.

1. Select the Remove tool now. Note that when switching between the Insert and Remove tools, you will often encounter a save prompt, as seen below. If there are no changes to save, this prompt will not appear!

Image 5: When you switch tools, you may be asked to save your work

2. Simply click the Save button to continue, as choosing the Cancel button will halt the tool selection.

3. With the Remove tool selected, you can adjust the Brush Settings from the toolbar below the image, at the bottom of the screen.

Image 6: The Brush Settings overlay

4. Zoom in closer to the wall outlet and brush over the area by clicking and dragging with your mouse. The size of your brush, depending upon brush settings, will appear as a circular outline. You can change the size of the brush by tapping the [ or ] keys on your keyboard.

Image 7: Brushing over the wall outlet with the Remove tool
5. Once you are happy with the selection you've made, click the Remove button within the toolbar at the bottom of the screen.

Image 8: The Remove button appears within the toolbar

6. The Firefly AI uses Generative fill to replace the brushed-over area with new content based upon the surrounding pixels. A set of four variations appears below the photograph. Click on each one to preview, as they can vary quite a bit.

Image 9: Selecting a fill variant

7. Click the Keep button in the toolbar to save your selection and continue editing. Remember, if you attempt to switch tools before saving, Firefly will prompt you to save your edits via a small overlay prompt.

The outlet has now been removed and the wall is all patched up.

Aside from the removal of objects through Generative fill, we can also perform insertions based on text prompts. Let's add some additional elements to our photograph using these methods.

1. Select the Insert tool from the left-hand toolbar.

2. Use it in a similar way as we did the Remove tool to brush in a selection of the image. In this case, we'll add a crown to Poe's head, so brush in an area that contains the top of his head and some space above it. Try to visualize a crown shape as you do this.

3. In the prompt input that appears beneath the photograph, type in a descriptive text prompt similar to the following: "regal crown with many jewels"

Image 10: A selection is made, and a text prompt inserted

4. Click the Generate button to have the Firefly AI perform a Generative fill insertion based upon our text prompt as part of the selected area.

Image 11: Poe is a regal cat

5. A crown is generated in accordance with our text prompt and the surrounding area. A set of four variations to choose from appears as well. Note how integrated they appear against the original photographic content.

6. Click the Keep button to commit and save your crown selection.

7. Let's add a scepter as well. Brush the general form of a scepter across Poe's body, extending from his paws to his shoulder.

8. Type in the text prompt: "royal scepter"

Image 12: Brushing in a scepter shape

9. Click the Generate button to have the Firefly AI perform a Generative fill insertion based upon our text prompt as part of the selected area.

Image 13: Poe now holds a regal scepter in addition to his crown

10. Remember to choose a scepter variant and click the Keep button to commit and save your scepter selection.

Okay! That should be enough regalia to satisfy Poe.
Let's download our creation for distribution or use in other software.

Downloading Your Image

Click the Download button in the upper right of the screen to begin the download process for your image.

Image 14: The Download button

As Firefly begins preparing the image for download, a small overlay dialog appears.

Image 15: Content credentials are applied to the image as it is downloaded

Firefly applies metadata to any generated image in the form of content credentials, and the image download process begins. Once the image is downloaded, it can be viewed and shared just like any other image file.

Image 16: The final image from our exploration of Generative fill

Along with content credentials, a small badge is placed upon the lower right of the image which visually identifies the image as having been produced with Adobe Firefly.

That concludes our set of articles on using Generative fill to remove and insert objects into your images using the Adobe Firefly AI. We have a number of additional articles on Firefly procedures on the way, including Generative recolor for vector artwork!

Author Bio

Joseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023


Google Bard for Finance

Anshul Saxena
07 Nov 2023
7 min read
Introduction

Hey there, financial explorers!

Ever felt overwhelmed by the vast sea of investment strategies out there? You're not alone. But amidst this overwhelming ocean, one lighthouse stands tall: Warren Buffett. The good news? We've teamed up with Google Bard to break down his legendary value-investing approach into bite-sized, actionable prompts. Think of it as your treasure map, leading you step by step through the intricate maze of investment wisdom that Buffett has championed over the years.

Decoding Smart Investing: A Buffett-Inspired Guide

Let's dive straight into the art of smart investing, inspired by the one and only Warren Buffett. First things first: get to know the business you're eyeing. What's their main product, and why is it special? How's their industry doing, and who are the big names in their field? It's crucial to grasp how they earn their bucks. Next, roll up your sleeves and peek into their financial health: check out their revenues, costs, profits, and some essential ratios that give you the real picture. Now, who's steering the ship? Understand the team's past decisions, how they communicate with shareholders, and whether their interests align with the company's success.

But wait, there's more! Every company has something that makes it stand out, be it its brand, cost efficiency, or even special approvals that keep competitors at bay. And before you take the plunge, make sure you know what the company is truly worth and whether its future looks bright. We're talking about its real value and what lies ahead in terms of growth and potential hiccups. Ready to dive deep? Let's get started!

Step 1. Understand the Business

- Product or Service: Start by understanding the core product or service of the company. What do they offer, and how is it different from competitors?
- Industry Overview: Understand the industry in which the company operates. What are the industry's growth prospects? Who are the major players?
- Business Model: Dive deep into how the company makes money. What are its main revenue streams?

Step 2. Analyze Financial Health

- Income Statement: Look at the company's revenues, costs, and profits over time.
- Balance Sheet: Examine assets, liabilities, and shareholders' equity to assess the company's financial position.
- Cash Flow Statement: Understand how money moves in and out of the company. Positive cash flow is a good sign.
- Key Ratios: Calculate and analyze ratios like Price-to-Earnings (P/E), Debt-to-Equity, Return on Equity (ROE), and others.

Step 3. Management Quality

- Track Record: What successes or failures has the current management team had in the past?
- Shareholder Communication: Buffett values management teams that communicate transparently and honestly with shareholders.
- Alignment: Do management's interests align with shareholders'? For instance, do they own a significant amount of stock in the company?

Step 4. Competitive Advantage (or Moat)

- Branding: Does the company have strong brand recognition or loyalty?
- Cost Advantages: Can the company produce goods or services more cheaply than competitors?
- Network Effects: Do more users make the company's product or service more valuable (e.g., Facebook or Visa)?
- Regulatory Advantages: Does the company have patents, licenses, or regulatory approvals that protect it from competition?

Step 5. Valuation

- Intrinsic Value: Estimate the intrinsic value of the company. Buffett often uses the discounted cash flow (DCF) method.
- The Margin of Safety: Aim to buy at a price significantly below the intrinsic value to provide a cushion against unforeseen negative events or errors in valuation.

Step 6. Future Prospects

- Growth Opportunities: What are the company's prospects for growth in the next 5-10 years?
- Risks: Identify potential risks that could derail the company's growth or profitability.

Now let's prompt our way toward making smart decisions using Google Bard. In this case, we have taken Google as a use case.

1. Understand the Business

- Product or Service: "Describe the core product or service of the company. Highlight its unique features compared to competitors."
- Industry Overview: "Provide an overview of the industry the company operates in, focusing on growth prospects and key players."
- Business Model: "Explain how the company earns revenue. Identify its main revenue streams."

2. Analyze Financial Health

- Income Statement: "Summarize the company's income statement, emphasizing revenues, costs, and profits trends."
- Balance Sheet: "Analyze the company's balance sheet, detailing assets, liabilities, and shareholder equity."
- Cash Flow Statement: "Review the company's cash flow. Emphasize the significance of positive cash flow."
- Key Ratios: "Calculate and interpret key financial ratios like P/E, Debt-to-Equity, and ROE."

3. Management Quality

- Track Record: "Evaluate the current management's past performance and decisions."
- Shareholder Communication: "Assess the transparency and clarity of management's communication with shareholders."
- Alignment: "Determine if management's interests align with shareholders. Note their stock ownership."

4. Competitive Advantage (or Moat)

- Branding: "Discuss the company's brand strength and market recognition."
- Cost Advantages: "Examine the company's ability to produce goods/services at a lower cost than competitors."
- Network Effects: "Identify if increased user numbers enhance the product/service's value."
- Regulatory Advantages: "List any patents, licenses, or regulatory advantages the company holds."

5. Valuation

- Intrinsic Value: "Estimate the company's intrinsic value using the DCF method."
- The Margin of Safety: "Determine the ideal purchase price to ensure a margin of safety in the investment."

6. Future Prospects

- Growth Opportunities: "Predict the company's growth potential over the next 5-10 years."
- Risks: "Identify and elaborate on potential risks to the company's growth or profitability."

These prompts should guide an individual through the investment research steps in the manner of Warren Buffett.

Conclusion

Well, that's a wrap! Remember, the journey of investing isn't a sprint; it's a marathon. With the combined wisdom of Warren Buffett and the clarity of Google Bard, you're now armed with a toolkit that's both enlightening and actionable. Whether you're just starting out or looking to refine your investment compass, these prompts are your trusty guide. So, here's to making informed, thoughtful decisions and charting a successful course in the vast world of investing. Happy treasure hunting!

Author Bio

Dr. Anshul Saxena is an author, corporate consultant, inventor, and educator who assists clients in finding financial solutions using quantum computing and generative AI. He has filed over three Indian patents and has been granted an Australian Innovation Patent. Anshul is the author of two best-selling books in the realm of HR Analytics and Quantum Computing (Packt Publications).
He has been instrumental in setting up new-age specializations like decision sciences and business analytics in multiple business schools across India. Currently, he is working as Assistant Professor and Coordinator – Center for Emerging Business Technologies at CHRIST (Deemed to be University), Pune Lavasa Campus. Dr. Anshul has also worked with reputed companies like IBM as a curriculum designer and trainer and has been instrumental in training 1000+ academicians and working professionals from universities and corporate houses like UPES, CRMIT, NITTE Mangalore, Vishwakarma University, Pune & Kaziranga University, and KPMG, IBM, Altran, TCS, Metro Cash & Carry, HPCL & IOC. With five years of experience in financial risk analytics at TCS and Northern Trust, Dr. Anshul has guided master's students in creating projects on emerging business technologies, which have resulted in 8+ Scopus-indexed papers. Dr. Anshul holds a PhD in Applied AI (Management), an MBA in Finance, and a BSc in Chemistry. He possesses multiple certificates in the field of Generative AI and Quantum Computing from organizations like SAS, IBM, IISc, Harvard, and BIMTECH.

Author of the book: Financial Modeling Using Quantum Computing


Streamlining Business Insights with Dataverse & Fabric

Adeel Khan
11 Jan 2024
9 min read
Introduction

AI and data analysis are critical for businesses to gain insights, optimize processes, and deliver value to their customers. However, working with data can be challenging, especially when it comes from various sources, formats, and systems. That's why Microsoft offers two powerful platforms that can help you simplify your data and analytics needs: Microsoft Dataverse and Microsoft Fabric.

Microsoft Dataverse is a smart, secure, and scalable low-code data platform that lets you create and run applications, flows, and intelligent agents with common tables, extended attributes, and semantic meanings. Microsoft Fabric is a unified platform that can meet your organization's data and analytics needs. It integrates data lake, data engineering, and data integration from Power BI, Azure Synapse, and Azure Data Factory into a single SaaS experience, enabling a single layer for managing data across the enterprise as well as enabling data scientists to apply models effectively using data from these sources.

In this article, we will learn how to consume data from Dataverse, prepare our data for ML experiments using other data sources, apply a machine learning model, and finally create an outcome that can be used again by users of Dataverse.

Business Scenario

In this blog, we will try to solve a problem for retail merchants selling products such as wine, fruit, meat, and fish through web and in-store experiences. The marketing department generates monthly sales offers but also wants to ensure these offers reach customers based on the revenue they generate throughout the year. While this is easy for existing customers with long-term relationships, many new customers would remain outside such business segmentation for 12 months, which makes it harder for marketing to engage them. So we will help the marketing team get predicted revenue for all customers so they can plan the offers and realize the true potential of each customer.

Getting the Environment Ready

Let's understand where our data lives and how we can prepare it for the use case. The marketing team in this case uses the Microsoft Dynamics Marketing solution, so customer information is stored in Dataverse in a table called "Contact". The transaction information is stored in an e-commerce solution hosted in Azure, using Azure SQL as data storage. To replicate the environment we need to perform the following actions:

1. Download and deploy the "contact table solution" in the targeted Dataverse environment, following the steps defined at Microsoft Learn. The solution will deploy changes to the core contact table and a Power Automate flow that will be required to populate the data.

2. Download "DataFile_Master" and place the file in OneDrive for Business in the same environment as Dataverse.

3. Open the Power Automate cloud flow "ArticleFlow_loadCustomerData", deployed through the solution file, update the file location at the following action, and save.

Figure 1 - List action from Power Automate Cloud Flow

4. Run the flow so the data required for the simulation is uploaded to the contact table. After the execution of the flow, you should have the following data rows added to the contact table.

Figure 2 - Data Created in Contact Table of Dataverse

5. Let's set up a link between Dataverse and Microsoft Fabric.
This can be initiated by launching the Power Apps maker view and clicking Analyze > Link to Microsoft Fabric. Please refer to the documentation if you are performing this action for the first time. This will allow access to Dataverse tables in Microsoft Fabric for all data science activities without any ETL needs.

6. The last step of setting up the environment is to bring the transaction summary information into Fabric. Here we will import the information into the Microsoft Fabric workspace created by the Dataverse synchronization. First, launch Lakehouse from the workspace and choose the one created with the Dataverse synchronization. The name will start with dataverse_<environment>, where <environment> is the name of the Dataverse environment.

Figure 3 - Microsoft Fabric Auto created Workspace and Lakehouse for dataverse

7. This will bring you to the Lakehouse Explorer view, where you can view all the Dataverse tables. Now, from the menu, select Get Data > Upload files to bring our transaction summary file into the Lakehouse. In a real scenario, this can also be done through the many features offered by Data Factory.

Figure 4 - Action to upload transaction file in Lakehouse Explorer

8. Download the ready file "RetailStoreTxnSummary_01" and select the file in the prompt. This will upload the file under the Files folder of the Lakehouse.

9. Click on the three dots next to the file and choose Load to table > New table. This will show a prompt asking for the table name; let's keep it the same as the file name. This will create a table with the summary information.

Figure 5 - Action to convert transaction summary file to Delta table

We have completed the data preparation steps and can start preparing for the machine learning experiment. With the completion of the above actions, our Lakehouse now has a contact table mirrored from Dataverse and a retailstoretxnsummary_01 table created from the CSV file. We can now move to the next step of preparing data for the machine learning model.

Preparing Data for the Machine Learning Model

As discussed in our business problem, we will use the data to create a linear regression model. Our intention is to use the customer information and transactional information to predict expected revenue from customers over a 12-month cycle. Now that we have our data available in Fabric, we can use a notebook in Fabric to create a joined table that will be used as the source for testing and training the model. We will use the following code snippet to join the two tables and save them in the temporary test_train_data table. You can download the ready notebook file "01-Createtable" and upload it using Open Notebook > Existing Notebook, or create a new one from Open Notebook > New Notebook.

Figure 6 - a snapshot of the notebook "01-CreateTable"

Creating and Registering the ML Model

In this step we will use the data in our table "tbl_temp_traintestdata" and build a linear regression model. We are using linear regression as it is simple, easy to learn, and suitable for numerical predictions. We will use the following PySpark code; you can also download the ready file "02-CreateMLMode".
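The full script ships with the downloadable notebook. As a rough sketch of what such a PySpark training step can look like (the feature column names besides MntDairyProds and MntBeverageProds are hypothetical, as is the exact table layout):

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Assumption: the joined table holds numeric spend columns plus the
# crffa_revenue label referenced later in the article.
df = spark.read.table("tbl_temp_traintestdata")

feature_cols = ["MntDairyProds", "MntBeverageProds",
                "MntMeatProds", "MntFishProds"]  # hypothetical names
assembled = VectorAssembler(inputCols=feature_cols,
                            outputCol="features").transform(df)

train, test = assembled.randomSplit([0.8, 0.2], seed=42)
lr = LinearRegression(featuresCol="features", labelCol="crffa_revenue")
model = lr.fit(train)

# Training and held-out metrics, mirroring the evaluation below.
print("Train RMSE:", model.summary.rootMeanSquaredError)
print("Train R2:", model.summary.r2)
test_summary = model.evaluate(test)
print("Test RMSE:", test_summary.rootMeanSquaredError)
print("Test R2:", test_summary.r2)
print("Coefficients:", model.coefficients)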
Analyzing the Model's Performance

Let's quickly assess the model's performance using three evaluation criteria. First, we use RMSE (Root Mean Squared Error): the RMSE on the training data is approximately 9535.31, and on the test data it is approximately 8678.23. The lower the RMSE, the better the model's predictions. It's worth noting that RMSE is sensitive to outliers, meaning that a few large errors can significantly increase it.

Second, we used R2 (R-squared). The R2 for the training data is approximately 0.78 (or 78%), and for the test data it's approximately 0.82 (or 82%). This means that about 78% of the variability in the training data and 82% of the variability in the test data can be explained by the model. The model seems to be performing reasonably well, as indicated by the relatively high R2 values. However, the RMSE values suggest that there may be some large errors, possibly due to outliers in the data.

Lastly, we analyzed the coefficients to identify the importance of the selected features. The results were the following:

Figure 7 - Coefficients of features

We identified that MntDairyProds (42.06%) has the highest coefficient, meaning it has the most significant influence on the model's predictions; a change in this feature will have a substantial impact on the predicted crffa_revenue. MntBeverageProds (0.05%) and other features near 0 have less influence on the model's predictions, but they still contribute to the model's ability to accurately predict crffa_revenue.

Since data cleansing is not in the scope of this blog, we will accept these results and move to the next step: using this model to perform batch prediction on new customer profiles.

Using the Model for Batch Prediction

We will now use the registered model to batch-predict new customer profiles. We will fetch the rows where the current revenue is zero and pass the dataset for prediction. Finally, we will save the records so we can post them back to Dataverse. A ready file "03– UseModelandSavePredictions" can be downloaded, or the following code snippet can be copied.
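As a hedged sketch of what that scoring step can look like, continuing the assumptions above (the contactid column and the predicted_revenue table name are illustrative, not taken from the article):

# Assumption: score only contacts whose current revenue is zero.
new_profiles = spark.read.table("contact").filter("crffa_revenue = 0")
assembled_new = VectorAssembler(inputCols=feature_cols,
                                outputCol="features").transform(new_profiles)
predictions = model.transform(assembled_new)

# Persist predictions to the Lakehouse so Dataverse can pick them up.
(predictions
 .select("contactid", "prediction")
 .write.mode("overwrite")
 .saveAsTable("predicted_revenue"))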
After execution of this code, we will have a new table in OneLake with the predicted revenue. This table can then be synchronized back to Dataverse using virtual table configuration and used in the application.

Figure 8 - Output snapshot from OneLake SQL analytics endpoint

Detailed instructions are available at Microsoft Learn for making the prediction table available in Dataverse so the marketing team can use the predicted values for segmentation or personalized offerings.

Figure 9 - virtual table configured in Dataverse

Conclusion

In this blog, we have learned how to use Microsoft Fabric to create a regression model for predicting customer revenue, leveraging data from Dataverse and other sources. We have also seen how to consume the model through simple PySpark code. Moreover, we have explored how to integrate the model with Power Platform, opening up many new possibilities.

This is just one example of how this platform can accelerate working together to enable business scenarios that traditionally would have required a data scientist and many days of engineering. There are many more possibilities and benefits of using these tools, such as scalability, security, governance, and collaboration. We encourage you to experiment with and explore both Dataverse and Fabric to discover new ways of solving your business problems and creating value for your customers.

Author Bio

Mohammad Adeel Khan is a Senior Technical Specialist at Microsoft. A seasoned professional with over 19 years of experience with various technologies and digital transformation projects, he engages with enterprise customers across geographies and helps them accelerate digital transformation using Microsoft Business Applications, Data, and AI solutions. In his spare time, he collaborates with like-minded people and helps solve business problems for nonprofit organizations using technology.

Adeel is also known for his unique approach to learning and development. During the COVID-19 lockdown, he introduced his 10-year-old twins to Microsoft Learn. The twins not only developed their first Microsoft Power Platform app, an expense tracker, but also became one of the youngest sets of twins to earn the Microsoft Power Platform certification.

Efficient Data Caching with Vector Datastore for LLMs

Karthik Narayanan Venkatesh
25 Oct 2023
9 min read
Introduction

In the ever-evolving landscape of artificial intelligence, Large Language Models (LLMs) have taken center stage, transforming the way we interact with and understand vast amounts of textual data. With the proliferation of these advanced models, the need for efficient data management and retrieval has become paramount. Enter Vector Datastore, a game-changer in the realm of data caching for LLMs. This article explores how Vector Datastore's innovative approach, based on vector representations and similarity search, empowers LLMs to swiftly access and process data, revolutionizing their performance and capabilities.

How Does a Vector Datastore Enable a Data Cache for LLMs?

In every online source you scan, you will come across terms like chatbots, LLMs, or GPT. Most people are talking about large language models, and a new language model seems to be released every week. Before seeing how vector databases enable data caches for large language models, one must learn about them and their importance to language models.

Vector Databases: What Are They?

Getting an idea of vector embeddings is essential to understanding vector databases. A vector embedding is a data representation that carries semantic information, helping an artificial intelligence system better understand datasets while maintaining a long-term memory. Understanding and remembering are the most critical elements, especially if you want to learn anything new.

AI models usually generate embeddings. Every large language model works with a large variety of features, which makes their representations difficult to manage. With the help of embeddings, one can represent the various dimensions of the data, so artificial intelligence models can understand the patterns, relationships, and hidden structures.

In this scenario, handling vector embeddings with traditional scalar-based databases can be challenging: they cannot keep up with, or handle, the scale and complexity of such data. The complexities that come with vector embeddings require a specialized database, which is why vector databases are needed. With vector databases, one gets storage and query capabilities optimized for the unique structure of vector embeddings. As a result, the user gets high performance along with easy search capabilities, data retrieval, and scalability, achieved by comparing the similarities between vectors.

Though vector databases are difficult to implement, various tech giants are not only developing them but also making them manageable. Since they are expensive to implement, one must ensure proper calibration to achieve high performance.

How Does It Work?

Take the simple example of interacting with a large language model like ChatGPT: the model works with a large volume of data and content, while the user proceeds through the ChatGPT application. As the user, you submit your query to the application. Once this step is complete, the query is fed into the embedding model, which creates vector embeddings based on the content that requires indexing. After completing this process, the vector embeddings move into the vector database.
It usually occurs regarding the content that wants to be used for embedding.As a result, you will receive the outcome produced by vector databases. Therefore, the system sends it back to the user as a result.As a user continues making different queries, it goes through the same embedding model that helps create embeddings. It helps in processing the database query for similar—vector embeddings.Let us know the whole process in detail.A vector database incorporates diverse algorithms dedicated to facilitating Approximate Nearest Neighbor (ANN) searches. These algorithms encompass techniques such as hashing, graph-based search, and quantization, which are intricately combined into a structured pipeline for retrieving neighboring vectors concerning a queried input.The outcomes of this search operation are contingent upon the proximity or approximation of the retrieved vectors to the original query. Hence, the pivotal factors under consideration are accuracy and speed. A trade-off exists between the query output's speed and the results' precision; a slower output corresponds to a more accurate outcome.The process of querying a vector database unfolds in three fundamental stages: 1. IndexingA diverse array of algorithms comes into play upon the ingress of the vector embedding into the vector database. These algorithms serve the purpose of mapping the vector embedding onto specific data structures, thus optimizing the search process. This preparatory step significantly enhances the speed and efficiency of subsequent searches. 2. QueryingThe vector database systematically compares the queried vector and the previously indexed vectors. This comparison entails the application of a similarity metric, a crucial determinant in identifying the nearest neighbor amidst the indexed vectors. The precision and efficacy of this phase are paramount to the overall accuracy of the search results. 3. Post ProcessingUpon pinpointing the nearest neighbor, the vector database initiates a post-processing stage. The specifics of this stage may vary based on the particular vector database in use. Post-processing activities may involve refining the final output of the query, ensuring it aligns seamlessly with the user's requirements.Additionally, the vector database might undertake the task of re-ranking the nearest neighbors, a strategic move to enhance the database's future search capabilities. This meticulous post-processing step guarantees that the delivered results are accurate and optimized for subsequent reference, thereby elevating the overall utility of the vector database in complex search scenarios.Implementing vector data stores in LLM Let us consider an example to understand how a vector data store can be installed or implemented in a large language model. Before we can start with the implementation, one has to install the vector datastore library. pip install vectordatastore Assuming you have the data set containing the text snippets, you will get the following in code format. # Sample dataset dataset = {    "1": "Text snippet 1",    "2": "Text snippet 2",    # ... more data points ... 
Implementing vector data stores in an LLM

Let us consider an example of how a vector data store can be implemented alongside a large language model. Before starting the implementation, install the vector datastore library:

```
pip install vectordatastore
```

Assuming you have a dataset containing text snippets, the code looks like this:

```python
# Sample dataset
dataset = {
    "1": "Text snippet 1",
    "2": "Text snippet 2",
    # ... more data points ...
}

# Initialize Vector Datastore
from vectordatastore import VectorDatastore
vector_datastore = VectorDatastore()

# Index data into Vector Datastore
for key, text in dataset.items():
    vector_datastore.index(key, text)

# Query Vector Datastore from LLM
query = "Query text snippet"
similar_texts = vector_datastore.query(query)

# Process similar_texts in the LLM
# ...
```

In this example, the vector data store efficiently indexes the dataset using vector representations. When the large language model needs data similar to the query text, it queries the vector datastore to obtain the relevant snippets quickly.

Process of enabling data caches in LLMs

Vector Datastore enables efficient data caching for Large Language Models (LLMs) through its approach to handling data. Traditional caching mechanisms store data under keys, and retrieval means matching those keys exactly. LLMs, however, work with complex, high-dimensional data such as text embeddings, which are not easily indexed or retrieved as key-value pairs. Vector Datastore addresses this challenge by leveraging vector representations of data points:

1. Vector representation: Vector Datastore stores data points in vectorized form. Each data point, whether a text snippet or any other kind of information, is transformed into a high-dimensional numerical vector. This vectorization captures the semantic meaning of the data points and the relationships between them.

2. Similarity search: Instead of relying on exact key matches, Vector Datastore performs similarity searches over vector representations. When an LLM needs specific data, it translates its query into a vector using the same method employed during storage. The query vector is then compared against the stored vectors using similarity metrics such as cosine similarity or Euclidean distance.

3. Efficient retrieval: Because data is organized as vectors and searched by similarity, Vector Datastore can quickly identify the vectors most similar to the query vector without scanning the entire dataset, significantly reducing retrieval time.

4. Adaptive indexing: Vector Datastore adjusts its indexing strategy based on the data and queries it receives. As the dataset grows or query patterns change, it adapts its index structures to maintain search efficiency, so the cache remains fast even as data and query patterns evolve.

5. Scalability: Vector Datastore is designed for the large datasets common in LLM applications. Its architecture allows horizontal scaling, distributing the workload across multiple nodes or servers, so it can accommodate the vast amount of data processed by LLMs without compromising performance.

By working with vectorized data and searching by similarity rather than by key, Vector Datastore avoids the limitations of traditional key-based caching mechanisms and significantly improves the speed and responsiveness of LLMs, making it a valuable tool in natural language processing. A minimal sketch of such a semantic cache follows.
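To make the caching idea concrete, here is a minimal sketch of a semantic cache: an LLM answer is reused when a new query's embedding is close enough to a previously cached one. The `SemanticCache` class, the `embed_fn` callback, and the threshold value are all illustrative assumptions, not part of any specific product.

```python
import numpy as np

class SemanticCache:
    """Toy semantic cache: reuse a stored answer when a new query is
    close enough (by cosine similarity) to a previously cached query."""

    def __init__(self, embed_fn, threshold=0.9):
        self.embed_fn = embed_fn   # any text -> NumPy vector function (assumption)
        self.threshold = threshold
        self.vectors = []          # cached query embeddings (unit-normalized)
        self.answers = []          # cached LLM responses

    def get(self, query):
        if not self.vectors:
            return None
        q = self.embed_fn(query)
        q = q / np.linalg.norm(q)
        sims = np.array(self.vectors) @ q      # cosine similarity per cached query
        best = int(np.argmax(sims))
        if sims[best] >= self.threshold:
            return self.answers[best]          # cache hit: skip the LLM call
        return None                            # cache miss

    def put(self, query, answer):
        v = self.embed_fn(query)
        self.vectors.append(v / np.linalg.norm(v))
        self.answers.append(answer)
```

On a cache miss, the application calls the LLM, stores the answer with `put()`, and can then serve future near-duplicate queries without another model call.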
Conclusion

The development of LLMs is one of the crucial technological advancements of our time. They have the potential to revolutionize many aspects of our lives, but it is equally imperative that we use them ethically and responsibly in order to realize their full benefits.

Author Bio

Karthik Narayanan Venkatesh (aka Kaptain), founder of WisdomSchema, has multifaceted experience in the data analytics arena. He has been associated with the data analytics domain since the early 2000s, with a ringside view of transformations in this industry. He has led teams that architected and built scalable data platform solutions across the technology spectrum.

As a niche consulting provider, he has bridged the gap between business and technology and driven BI adoption through innovative, technology-agnostic approaches. He is a sought-after speaker who has delivered many lectures on SAP, Analytics, Snowflake, AWS, and GCP technologies.
Create an AI-Powered Coding Project Generator.

Luis Sobrecueva
22 Jun 2023
8 min read
Overview

Making a smart coding project generator can be a game-changer for developers. With the help of large language models (LLMs), we can generate entire code projects from a user-provided prompt.

In this article, we develop a Python program that utilizes OpenAI's GPT-3.5 to generate code projects and slide presentations based on user-provided prompts. The program is designed as a command-line interface (CLI) tool, which makes it easy to use and integrate into various workflows.

Image 1: Weather App

Features

Our project generator will have the following features:

- Generates entire code projects based on user-provided prompts
- Generates entire slide presentations based on user-provided prompts (watch a demo here)
- Uses OpenAI's GPT-3.5 for code generation
- Outputs to a local project directory

Example Usage

Our tool will be able to generate a code project from a user-provided prompt. For example, this line creates a snake game:

```
maiker "a snake game using just html and js"
```

We can then open the generated project in our browser:

```
open maiker-generated-project/index.html
```

Image 2: Generated Project

Implementation

To ensure a comprehensive understanding of the project, let's break down the process of creating the AI-powered coding project generator step by step:

1. Load environment variables: We use the `dotenv` package to load environment variables from a `.env` file. This file should contain your OpenAI API key.

```python
from dotenv import load_dotenv

load_dotenv()
```

2. Set up the OpenAI API client: We configure the OpenAI API client using the API key loaded from the environment variables.

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
```

3. Define the `generate_project` function: This function is responsible for generating code projects or slide presentations based on the user-provided prompt. Let's break it down in more detail.

```python
from typing import Dict

def generate_project(prompt: str, previous_response: str = "", type: str = "code") -> Dict[str, str]:
```

The function takes three arguments:

- prompt: The user-provided prompt describing the project to be generated.
- previous_response: A string containing any previously generated files. This is used to avoid regenerating the same files when the tool runs more than one generation loop.
- type: The type of project to generate, either "code" or "presentation".

Inside the function, we first create the system and user prompts based on the input type (code or presentation):

```python
if type == "presentation":
    # ... (presentation-related prompts)
else:
    # ... (code-related prompts)
```

For code projects, we create a system prompt that describes the role of the API as a code generator and a user prompt that includes the project description and any previously generated files. For presentations, we create a system prompt that describes the role of the API as a reveal.js presentation generator and a user prompt that includes the presentation description.

Next, we call the OpenAI API to generate the code or presentation using the system and user prompts:

```python
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": system_prompt,
        },
        {
            "role": "user",
            "content": user_prompt,
        },
    ],
    temperature=0,
)
```

We use the `openai.ChatCompletion.create` method to send a request to the GPT-3.5 model. The `messages` parameter contains two messages: the system message and the user message. The `temperature` parameter is set to 0 to encourage deterministic output.
Once we receive the response from the API, we extract the generated code from it:

```python
generated_code = completion.choices[0].message.content
```

We then attempt to parse the generated code as a JSON object. If parsing succeeds, we return the parsed JSON object: a dictionary mapping generated filenames to their contents. If parsing fails, we raise an exception with an error message:

```python
import json

try:
    if generated_code:
        generated_code = json.loads(generated_code)
except json.JSONDecodeError as e:
    raise click.ClickException(
        f"Code generation failed. Please check your prompt and try again. Error: {str(e)}, generated_code: {generated_code}"
    )

return generated_code
```

This dictionary is then used by the `main` function to save the generated files to the specified output directory.

4. Define the `main` function: This function is the entry point of our CLI tool. It takes a project prompt, an output directory, and the type of project (code or presentation) as input. It then calls the `generate_project` function to generate the project and saves the generated files to the specified output directory.

```python
def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)
```

Inside the `main` function, we ensure the output directory exists, generate the project, and save the generated files (one possible shape of this saving loop is sketched after this walkthrough):

```python
# ... (inside main function)
os.makedirs(output_dir, exist_ok=True)
for _loop in range(max_loops):
    generated_code = generate_project(prompt, ",".join(generated_files), type)
    for filename, contents in generated_code.items():
        # ... (rest of the code)
```

5. Create a Click command: We use the `click` package to create a command-line interface for our tool. We define the command, its arguments, and its options using the `click.command`, `click.argument`, and `click.option` decorators.

```python
import click

@click.command()
@click.argument("prompt")
@click.option(
    "--output-dir",
    "-o",
    default="./maiker-generated-project",
    help="The directory where the generated code files will be saved.",
)
@click.option('-t', '--type', required=False, type=click.Choice(['code', 'presentation']), default='code')
def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)
```

6. Run the CLI tool: Finally, we run the CLI tool by calling the `main` function when the script is executed:

```python
if __name__ == "__main__":
    main()
```
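To make step 4 concrete, here is one possible shape of the elided file-saving loop. This is a sketch, not the author's actual implementation from the repository: the `max_loops` value, the `generated_files` list, and the skip-existing check are illustrative assumptions, and the snippet presumes it runs inside `main`, where `prompt`, `output_dir`, and `type` are in scope.

```python
import os

max_loops = 3                 # assumption: run a few generation rounds
generated_files = []          # filenames produced so far

os.makedirs(output_dir, exist_ok=True)
for _loop in range(max_loops):
    generated_code = generate_project(prompt, ",".join(generated_files), type)
    for filename, contents in generated_code.items():
        if filename in generated_files:
            continue          # don't overwrite files from a previous round
        path = os.path.join(output_dir, filename)
        # Create any subdirectories the generated filename implies.
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, "w") as f:
            f.write(contents)
        generated_files.append(filename)
```

Passing the accumulated `generated_files` back into `generate_project` is what lets the prompt tell the model which files already exist, so later rounds fill in the gaps instead of repeating themselves.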
Throughout this article, we have used `... (rest of the code)` as a placeholder to keep the explanations concise and focused on specific parts of the code. The complete code for the AI-powered coding project generator can be found in the GitHub repository at the following link: https://github.com/lusob/maiker-cli

By visiting the repository, you can access the full source code, which includes all the components and functions needed to build the CLI tool. You can clone or download the repository to your local machine, install the required dependencies, and start using the tool to generate code projects and slide presentations from user-provided prompts.

Conclusion

With this AI-powered coding project generator, you can quickly generate code projects and slide presentations based on user-provided prompts. By leveraging the power of OpenAI's GPT-3.5, you can save time and effort in creating projects and focus on other important aspects of your work.

However, the complexity of the generated projects is currently limited by the model's token limitations. GPT-3.5 has a maximum token limit, which restricts the amount of information it can process and generate in a single API call. As a result, the generated projects might not be as comprehensive or sophisticated as desired for more complex applications.

The good news is that with continuous advances in AI research and the development of new models with larger context windows (e.g., models with more than 100k context tokens), we can expect significant improvements in the capabilities of AI-powered code generators. These advances will enable the generation of more complex and sophisticated projects, opening up new possibilities for developers and businesses alike.

Author Bio

Luis Sobrecueva is a software engineer with many years of experience working with a wide range of technologies across various operating systems, databases, and frameworks. He began his professional career as a research fellow in the engineering projects area at the University of Oviedo. He went on to develop low-level (C/C++) database engines and visual development environments at a private company before jumping into the world of web development, where he met Python and discovered his passion for machine learning, applying it to various large-scale projects such as creating and deploying a recommender for a job board with several million users. During that time he also began contributing to open-source deep learning projects, participating in machine learning competitions, and taking several ML courses, earning certifications that include a MicroMasters Program in Statistics and Data Science at MIT and a Udacity Deep Learning Nanodegree. He currently works as a Data Engineer at a ride-hailing company called Cabify, while continuing to develop his career as an ML engineer by consulting and contributing to open-source projects such as OpenAI and Autokeras.

Author of the book: Automated Machine Learning with AutoKeras