Feed aggregator

Llama 3 Groq 70B Tool Use Model - Local Installation and Function Calling

Pakistan's First Oracle Blog - 15 hours 20 min ago

 This video installs Llama-3-Groq-8B-Tool-Use locally, a model specifically designed for advanced tool use and function calling tasks.


Code:


conda create -n groqllama python=3.11 -y && conda activate groqllama

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

pip install torch transformers sentencepiece accelerate huggingface_hub tavily-python
export TAVILY_API_KEY=""


import transformers
import torch
import os
import re
import json
from tavily import TavilyClient
tavily_client = TavilyClient(api_key=os.getenv('TAVILY_API_KEY'))

import warnings
warnings.filterwarnings('ignore')

model_id = "Groq/Llama-3-Groq-8B-Tool-Use"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

prompt = """

<|start_header_id|>system<|end_header_id|>

You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>

Here are the available tools:
<tools> {
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "format": {
        "type": "string",
        "description": "The temperature unit to use. Infer this from the users location.",
        "enum": [
          "celsius",
          "fahrenheit"
        ]
      }
    },
    "required": [
      "location",
      "format"
    ]
  }
} </tools><|eot_id|><|start_header_id|>user<|end_header_id|>

What is the weather like in Sydney in Celsius?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

response = pipeline(prompt)

# Extract the generated text (prompt + completion) from the pipeline output
generated_text = response[0].get('generated_text', '')

# Use a regex pattern to find the tool call JSON
tool_call_match = re.search(r'\{.*?\}', generated_text, re.DOTALL)

if tool_call_match:
    tool_call_json = tool_call_match.group(0)
   
    # Correctly format the JSON string
    tool_call_json = tool_call_json.replace("<function-name>", "get_current_weather")  # Replace placeholder
    tool_call_json = tool_call_json.replace("<args-dict>", '{"location": "Sydney, NSW", "format": "celsius"}')  # Replace placeholder
    tool_call_json = tool_call_json.replace("'", '"')  # Replace single quotes with double quotes
    tool_call_json = tool_call_json.replace('name:', '"name":')  # Ensure proper quoting for keys
    tool_call_json = tool_call_json.replace('arguments:', '"arguments":')

    # Ensure proper quoting of all parts of the JSON string
    tool_call_json = tool_call_json.replace('"name": get_current_weather', '"name": "get_current_weather"')

    # Debug: Print the extracted JSON string
    #print(f"Extracted JSON: {tool_call_json}")
   
    # Correctly format the JSON string
    try:
        tool_call = json.loads(tool_call_json)
        print(tool_call)
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON: {e}")
        # Debug: Print the exact content that failed to parse
        print(f"Failed JSON content: {tool_call_json}")
else:
    print("No tool call JSON found.")
 

location = tool_call['arguments']['location']
format_unit = tool_call['arguments']['format']
query = f"current weather in {location} in {format_unit}"
response = tavily_client.search(query)
print(response)
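
The Tavily result can then be handed back to the model so it can phrase the final answer. A minimal sketch, assuming the same pipeline and prompt format as above; the tool role and the <tool_response> tag are assumptions based on the model's tool-use chat convention and are not shown in the video:

# Hypothetical follow-up turn: give the tool output back to the model so it can
# compose a natural-language answer. The tool role and <tool_response> tag are
# assumptions, not part of the original code.
assistant_turn = generated_text[len(prompt):]
followup_prompt = (
    prompt
    + assistant_turn
    + "<|eot_id|><|start_header_id|>tool<|end_header_id|>\n\n"
    + "<tool_response>\n"
    + json.dumps({"name": "get_current_weather", "content": response})
    + "\n</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
final = pipeline(followup_prompt, max_new_tokens=128)
print(final[0]["generated_text"][len(followup_prompt):])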
Categories: DBA Blogs

Create Your Own Planner with GPT4o Mini Locally

Pakistan's First Oracle Blog - Mon, 2024-07-22 23:42

 This video is an easy, step-by-step tutorial on creating a generic planner with API calls and a Gradio interface using GPT-4o Mini.


Code:

#pip install openai gradio
#export OPENAI_API_KEY=""

import openai
import os
import gradio as gr

client = openai.OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

def generate_plans(user_query, n=5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Plan and respond to the user query."},
            {"role": "user", "content": user_query}
        ],
        n=n,
        temperature=0.7,
        max_tokens=500
    )
    plans = [choice.message.content for choice in response.choices if choice.message.content.strip() != '']
    if not plans:
        plans = ["Plan A", "Plan B", "Plan C"]  
    return plans

def compare_plans(plan1, plan2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Choose the better plan."},
            {"role": "user", "content": f"Plan 1: {plan1}\n\nPlan 2: {plan2}\n\nWhich plan is better? Respond with either '1' or '2'."}
        ],
        temperature=0.2,
        max_tokens=10
    )
    return response.choices[0].message.content.strip() if response.choices[0].message.content.strip() != '' else '1'

def evaluate_plans(plans, user_query):
    winners = plans
    while len(winners) > 1:
        next_round = []
        for i in range(0, len(winners), 2):
            if i+1 < len(winners):
                winner = winners[i] if compare_plans(winners[i], winners[i+1]) == '1' else winners[i+1]
            else:
                winner = winners[i]
            next_round.append(winner)
        winners = next_round
    return winners[0] if winners else 'No best plan found'

def generate_response(best_plan, user_query):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Respond to the user query based on the plan."},
            {"role": "user", "content": f"User Query: {user_query}\n\nPlan: {best_plan}\n\nGenerate a detailed response."}
        ],
        temperature=0.5,
        max_tokens=2000
    )
    return response.choices[0].message.content

def improved_ai_output(user_query, num_plans=20):
    plans = generate_plans(user_query, n=num_plans)
    best_plan = evaluate_plans(plans, user_query)
    final_response = generate_response(best_plan, user_query)
    return {
        "user_query": user_query,
        "best_plan": best_plan,
        "final_response": final_response
    }

def chat(query):
    result = improved_ai_output(query)
    return result['final_response']

interface = gr.Interface(
    fn=chat,
    inputs=gr.Textbox(lines=2, placeholder="Ask me anything..."),
    outputs=gr.Textbox(),
    title="My Planner",
    description="Get a personalized plan as per your requirement!"
)

if __name__ == "__main__":
    interface.launch()
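
For a quick check without launching the Gradio interface, the same helper can also be called directly. The query below is just a hypothetical example, and num_plans is kept small to limit API calls:

# Hypothetical sanity check run outside the Gradio UI.
result = improved_ai_output("Plan a productive 4-hour study session", num_plans=4)
print(result["best_plan"])
print(result["final_response"])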
Categories: DBA Blogs

Shrinking High Water Mark

Tom Kyte - Mon, 2024-07-22 12:46
I have noticed that the HWM will only go down if the table is truncated. If I do the following, will it lower the HWM?

CREATE TABLE temp AS SELECT * FROM table_name;
TRUNCATE table_name;
INSERT INTO table_name SELECT * FROM temp;
COMMIT;

This has been successful at times in lowering the HWM and at other times not. I am wondering why the inconsistency? Is there a better way? I am measuring the used blocks with the following:

select count(distinct dbms_rowid.rowid_block_number(rowid) || dbms_rowid.rowid_relative_fno(rowid)) "Used"
from table_name;

Thanks
Categories: DBA Blogs

Find objects used in a SQL

Tom Kyte - Mon, 2024-07-22 12:46
We have a UI-based application where users come in and set up SQLs to get Excel-based reports back; there are multiple options to choose from for the schedule of the SQL execution and other related parameters. It's a free-text box, and the expectation is that the users will test their SQL in the database before they set it up in this tool. One of our current requirements is to identify the dependency of objects in the SQL. If this is the SQL:

select a.col1, b.col2, c.col3
from table_a a, view b, pipe_line_function c
where a.col1 = b.col2
and b.col2 = c.col3;

then as a list of dependent objects, the requirement is to get:

TABLEA
VIEW, and the objects within the VIEW, until we drill down to the base tables or the most granular level.
pipe_line_function, and the objects within it, until we drill down to the base tables or the most granular level.

Is this possible using any new SQL functions, dependency functions, etc., without creating a view of the above SQL set up by the users? We are aware of DBA_DEPENDENCIES. It is not possible to create a view and then grab the dependencies, hence this ticket.
Categories: DBA Blogs

RAG Pipeline Tutorial Using Ollama, Triplex, and LangChain On Custom Data Locally

Pakistan's First Oracle Blog - Sun, 2024-07-21 15:10

 This video is a step-by-step guide to building an end-to-end RAG pipeline on your own custom data locally, using Ollama models (Triplex and Mistral) and LangChain, with a GUI in Gradio.



Code:


conda create -n ragpipe python=3.11 -y && conda activate ragpipe

pip install torch sentence_transformers transformers accelerate
pip install langchain==0.1.14
pip install langchain-experimental==0.0.56
pip install langchain-community==0.0.31
pip install faiss-cpu==1.8.0
pip install pdfplumber==0.11.0
pip install gradio==4.25.0
pip install ollama
pip install pypdf
conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook

from langchain_community.document_loaders import PDFPlumberLoader
from langchain_experimental.text_splitter import SemanticChunker
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import TextLoader
from pypdf import PdfReader
import ollama
import gradio as gr
import os
import json

def triplextract(text, entity_types, predicates):
    input_format = """
        **Entity Types:**
        {entity_types}

        **Predicates:**
        {predicates}

        **Text:**
        {text}
        """

    message = input_format.format(
                entity_types = json.dumps({"entity_types": entity_types}),
                predicates = json.dumps({"predicates": predicates}),
                text = text)

    # Pass the message as a single string
    prompt = message
    output = ollama.generate(model='triplex', prompt=prompt)
    return output

entity_types = ["PERSON", "LOCATION"]
predicates = ["PROFESSION", "BASED_IN"]
   
reader = PdfReader("/home/Ubuntu/myfiles/mypdf.pdf")
text = ""
for page in reader.pages:
    text += page.extract_text() + "\n"
   

prediction = triplextract(text, entity_types, predicates)

# Strip the Markdown code fences (and the optional "json" tag) from the model output
response_string = prediction['response'].strip()
if response_string.startswith("```"):
    response_string = response_string.strip("`").strip()
    response_string = response_string.removeprefix("json").strip()
response_json = json.loads(response_string)
entities_and_triples = response_json['entities_and_triples']
print(entities_and_triples)
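
The extracted entities and triples are only printed here; the retriever below indexes the raw PDF text. If you also want to keep the triples around (for inspection or later indexing), one optional step, not part of the original walkthrough, is to persist them next to the text:

# Optional sketch (not in the original pipeline): save the Triplex output so it
# can be reloaded or indexed alongside output.txt if desired.
with open('triples.json', 'w') as f:
    json.dump(entities_and_triples, f, indent=2)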

with open('output.txt', 'w') as f:
    f.write(text)
   
loader = TextLoader("./output.txt")
docs = loader.load()

       
# Split into chunks
text_splitter = SemanticChunker(HuggingFaceEmbeddings())
documents = text_splitter.split_documents(docs)

# Instantiate the embedding model
embedder = HuggingFaceEmbeddings()

# Create the vector store and fill it with embeddings
vector = FAISS.from_documents(documents, embedder)
retriever = vector.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# Define llm
llm = Ollama(model="mistral")

# Define the prompt
prompt = """
1. Use the following pieces of context to answer the question at the end.
2. If you don't know the answer, just say that "I don't know" but don't make up an answer on your own.\n
3. Keep the answer crisp and limited to 3,4 sentences.

Context: {context}

Question: {question}

Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate.from_template(prompt)

llm_chain = LLMChain(
                  llm=llm,
                  prompt=QA_CHAIN_PROMPT,
                  callbacks=None,
                  verbose=True)

document_prompt = PromptTemplate(
    input_variables=["page_content", "source"],
    template="Context:\ncontent:{page_content}\nsource:{source}",
)

combine_documents_chain = StuffDocumentsChain(
                  llm_chain=llm_chain,
                  document_variable_name="context",
                  document_prompt=document_prompt,
                  callbacks=None)
             
qa = RetrievalQA(
                  combine_documents_chain=combine_documents_chain,
                  verbose=True,
                  retriever=retriever,
                  return_source_documents=True)

def respond(question, history):
    return qa(question)["result"]


gr.ChatInterface(
    respond,
    chatbot=gr.Chatbot(height=500),
    textbox=gr.Textbox(placeholder="Ask me question related to Fahd Mirza", container=False, scale=7),
    title="Fahd's Chatbot",
    examples=["Where Fahd Lives", "Who is Fahd"],
    cache_examples=True,
    retry_btn=None,

).launch(share = True)
Categories: DBA Blogs

Invoice Table Detection with Table Transformer

Andrejus Baranovski - Sun, 2024-07-21 14:02
I show how an open-source transformer model from Microsoft for table detection and structure recognition works. The code is integrated into Sparrow Parse and runs on a local CPU. This approach helps to crop the table area first and then get coordinates for the table cells. Each cell can be cropped and text can be extracted with OCR. This allows retaining the original table structure and reporting the result in JSON or CSV formats. The data extraction part is not in this video; this will be the topic for the next video.
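
A minimal sketch of the cell-cropping and OCR step described above. The cell_boxes list (one x0, y0, x1, y1 tuple per detected cell) is assumed to come from the Table Transformer structure-recognition output, and pytesseract stands in for whichever OCR engine is actually used in Sparrow Parse:

# Hypothetical helper: crop each detected table cell and OCR its text.
from PIL import Image
import pytesseract

def extract_cells(table_image_path, cell_boxes):
    table = Image.open(table_image_path)
    cells = []
    for box in cell_boxes:
        crop = table.crop(box)                           # crop one table cell
        text = pytesseract.image_to_string(crop).strip() # OCR the cell contents
        cells.append({"box": box, "text": text})
    return cells  # ready to be regrouped into rows and dumped as JSON or CSV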

 

GraphRAG Replacement - SciPhi Triplex - Step by Step Local Installation

Pakistan's First Oracle Blog - Sat, 2024-07-20 20:45

 This video installs Triplex, a fine-tuned version of Phi3-3.8B developed by SciPhi.AI for creating knowledge graphs from unstructured data. It works by extracting triplets.



Code:

conda create -n triplex python=3.11 -y && conda activate triplex

pip install torch transformers accelerate

import json
from transformers import AutoModelForCausalLM, AutoTokenizer

def triplextract(model, tokenizer, text, entity_types, predicates):

    input_format = """
        **Entity Types:**
        {entity_types}

        **Predicates:**
        {predicates}

        **Text:**
        {text}
        """

    message = input_format.format(
                entity_types = json.dumps({"entity_types": entity_types}),
                predicates = json.dumps({"predicates": predicates}),
                text = text)

    messages = [{'role': 'user', 'content': message}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt = True, return_tensors="pt").to("cuda")
    output = tokenizer.decode(model.generate(input_ids=input_ids, max_length=2048)[0], skip_special_tokens=True)
    return output

model = AutoModelForCausalLM.from_pretrained("sciphi/triplex", trust_remote_code=True).to('cuda').eval()
tokenizer = AutoTokenizer.from_pretrained("sciphi/triplex", trust_remote_code=True)

entity_types = [ "LOCATION", "POSITION", "DATE", "CITY", "COUNTRY", "NUMBER" ]
predicates = [ "POPULATION", "AREA" ]
text = """
San Francisco,[24] officially the City and County of San Francisco, is a commercial, financial, and cultural center in Northern California.

With a population of 808,437 residents as of 2022, San Francisco is the fourth most populous city in the U.S. state of California behind Los Angeles, San Diego, and San Jose.
"""

prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)


entity_types = ["CASE", "LAWYER", "DATE"]
predicates = ["VERDICT", "CHARGES"]
text = """
In the landmark case of Roe v. Wade (1973), lawyer Sarah Weddington successfully argued before the US Supreme Court, leading to a verdict that protected women's reproductive rights.
"""
prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)
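
The decoded output still contains the chat-template text around the extracted graph. A rough, optional way to isolate the JSON portion; this post-processing, and the assumption that the answer is emitted as a fenced ```json block, are not part of the original code:

# Optional post-processing sketch: try to isolate a fenced JSON block from the
# decoded output and pretty-print it; report if no such block is found.
import re
match = re.search(r'```json\s*(\{.*?\})\s*```', prediction, re.DOTALL)
if match:
    graph = json.loads(match.group(1))
    print(json.dumps(graph, indent=2))
else:
    print("No JSON block found in the output.")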
Categories: DBA Blogs

Error in job of type: executable (window ) (express edition)

Tom Kyte - Sat, 2024-07-20 05:46
I HAVE "Windows 11 Home Single Language". DEVICE NAME: Laptopnum02. NO PASSWORD FOR MY ACCOUNT. 1.CREATE A TEST *.CMD FILE . CREATE A *.TXT THEN CHANGED THE EXTENSION TO: *.CMD THIS MAKES THE *.CMD CALLED: creararchivo.cmd <code>@echo off echo Este es el contenido del archivo creado por demo.cmd. > archivo_creado.txt echo Segunda linea en el archivo creado. >> archivo_creado.txt echo Tercera linea en el archivo creado. >> archivo_creado.txt</code> I DISABLED MY ANTIVIRUS (AVAST) CREATE CREDENTIALS IN sqldeveloper <code>BEGIN DBMS_SCHEDULER.CREATE_CREDENTIAL ( credential_name => 'CREDENCIAL_PRUEBA', username => 'TONYROJAS', password => 'ABC', windows_domain => 'SYSTEM' ); END; CREATE JOB BEGIN DBMS_SCHEDULER.create_job ( job_name => 'PRUEBA', job_type => 'EXECUTABLE', job_action => 'C:\WINDOWS\SYSTEM32\CMD.EXE', number_of_arguments => 2, start_date => SYSTIMESTAMP, repeat_interval => 'FREQ=DAILY; BYHOUR=23', enabled => FALSE, auto_drop => FALSE, comments => 'Job para realizar backup diario' ); -- EJECUTAR COMANDOS DENTRO DEL SIMBOLO DEL SISTEMA DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 1, '/C'); DBMS_SCHEDULER.set_job_argument_value ('PRUEBA', 2, 'C:\BackupOracle\creararchivo.cmd'); ** DBMS_SCHEDULER.set_attribute('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');** DBMS_SCHEDULER.enable('PRUEBA'); ` END; EXECUTE JOB BEGIN DBMS_SCHEDULER.run_job('PRUEBA'); END; ERROR: Bug Report - ORA-27369: job of type EXECUTABLE failed with exit code: 7 Invalid username or password ORA-06512: in "SYS.DBMS_ISCHED", line 241 ORA-06512: in "SYS.DBMS_SCHEDULER", line 566 Also, when I delete the JOB credential: DELETE THIS PART: DBMS_SCHEDULER.set_attribute('PRUEBA', 'credential_name', 'CREDENCIAL_PRUEBA');</code> And run it again, I get the following error: <code>Bug Report - ORA-27369: job of type EXECUTABLE failed with exit code: 1 Access Denied. Access denied. Access denied.</code> I would like to understand both errors and how to solve it please.
Categories: DBA Blogs

Create Space Shooter Game with Local Free AI Model Mistral Nemo

Pakistan's First Oracle Blog - Sat, 2024-07-20 02:53

 This video shows how to create a free local space shooter game from just a text prompt using the Mistral NeMo LLM.


Full Prompt:

"""
Create a Python program that generates a customizable space shooter game. The game should have the following features:

A player ship that can be customized with a choice of 3 colors and 2 weapon types
3 types of enemies with unique behaviors and difficulty levels
2 power-ups that can be collected by the player
5 levels with increasing difficulty
A boss fight at the end of level 5

The program should use the Pygame library and generate the game assets (sprites, backgrounds, etc.) using AI algorithms. The game should be playable from the terminal.

Customization Options:

Player ship color (red, blue, or green)
Player ship weapon type (laser or missiles)
AI-Generated Assets:
Player ship sprite
Enemy sprites (3 types)
Power-up icons (2 types)
Backgrounds (5 levels)

Gameplay Logic:
Player movement and shooting
Enemy AI behaviors (patrolling, chasing, etc.)
Collision detection and response
Scoring and level progression
Test the AI model's ability to generate a functional and engaging space shooter game with customization options.

"""

Full Game Code:


```python
import pygame
import random

# Initialize Pygame
pygame.init()

# Set up some constants
WIDTH, HEIGHT = 800, 600
FPS = 60
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
RED = (255, 0, 0)
LIVES = 3

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Space Shooter")
clock = pygame.time.Clock()

# Player class
class Player(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((50, 50))
        self.image.fill(WHITE)
        self.rect = self.image.get_rect()
        self.rect.centerx = WIDTH // 2
        self.rect.bottom = HEIGHT - 10
        self.speed_x = 0

    def update(self):
        self.speed_x = 0
        keys = pygame.key.get_pressed()
        if keys[pygame.K_LEFT]:
            print("Left key pressed")
            self.speed_x = -5
        if keys[pygame.K_RIGHT]:
            print("Right key pressed")
            self.speed_x = 5
        self.rect.x += self.speed_x
        if self.rect.left < 0:
            self.rect.left = 0
        if self.rect.right > WIDTH:
            self.rect.right = WIDTH

    def shoot(self):
        bullet = Bullet(self.rect.centerx, self.rect.top)
        all_sprites.add(bullet)
        bullets.add(bullet)

# Enemy class
class Enemy(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((30, 30))
        self.image.fill(RED)
        self.rect = self.image.get_rect()
        self.rect.x = random.randrange(WIDTH - self.rect.width)
        self.rect.y = random.randrange(-100, -40)
        self.speed_y = random.randrange(1, 8)

    def update(self):
        self.rect.y += self.speed_y
        if self.rect.top > HEIGHT + 10:
            self.rect.x = random.randrange(WIDTH - self.rect.width)
            self.rect.y = random.randrange(-100, -40)
            self.speed_y = random.randrange(1, 8)

# Bullet class
class Bullet(pygame.sprite.Sprite):
    def __init__(self, x, y):
        super().__init__()
        self.image = pygame.Surface((10, 20))
        self.image.fill(WHITE)
        self.rect = self.image.get_rect()
        self.rect.centerx = x
        self.rect.top = y
        self.speed_y = -10

    def update(self):
        self.rect.y += self.speed_y
        if self.rect.bottom < 0:
            self.kill()

# Create sprite groups
all_sprites = pygame.sprite.Group()
enemies = pygame.sprite.Group()
bullets = pygame.sprite.Group()

# Create player instance
player = Player()
all_sprites.add(player)

# Create enemy instances
for i in range(10):
    enemy = Enemy()
    all_sprites.add(enemy)
    enemies.add(enemy)

# Main game loop
running = True
lives = LIVES
while running:
    clock.tick(FPS)

    # Process input (events)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_SPACE:
                print("Spacebar pressed")
                player.shoot()

    # Update
    all_sprites.update()

    # Check for bullet-enemy collisions
    hits = pygame.sprite.groupcollide(enemies, bullets, True, True)
    for hit in hits:
        enemy = Enemy()
        all_sprites.add(enemy)
        enemies.add(enemy)

    # Check if player collides with an enemy
    if pygame.sprite.spritecollideany(player, enemies):
        lives -= 1
        print(f"Lives left: {lives}")
        if lives == 0:
            running = False
        else:
            player.rect.centerx = WIDTH // 2
            player.rect.bottom = HEIGHT - 10

    # Draw
    screen.fill(BLACK)
    all_sprites.draw(screen)

    # Flip the display
    pygame.display.flip()

pygame.quit()
```
Categories: DBA Blogs

Video on OCR and OLR commands in RAC GI/ClusterWare

Hemant K Chitale - Fri, 2024-07-19 21:14

 Last week I published a new video on OCR and OLR commands.

ocrcheck  :  Lists the locations of the OCR and checks for corruption (run as root to check for logical corruption as well)

ocrconfig -add DG Name (e.g. ocrconfig -add +DATA)   :  Adds a new copy of the OCR in the stated ASM DG

ocrconfig -delete DG Name  : Deletes a copy of the OCR from the ASM DG 


cat /etc/oracle/olr.loc :  Shows the location of the OLR

ocrcheck -local : Checks the OLR


ocrconfig -showbackup  :  Shows the default location of OCR backups

ocrconfig -manualbackup  : Creates a manual backup of the OCR

(use asmcmd to copy the backup out from ASM to Filesystem)


ocrconfig -local -showbackuploc : Shows the location of OLR backups

ocrconfig -local -manualbackup :  Creates a manual backup of the OLR

ocrconfig -local -export  : Creates an export backup of the OLR



Categories: DBA Blogs

file transfer

Tom Kyte - Thu, 2024-07-18 17:06
Hi Tom, I am getting the following error when I use the copy_file procedure of the dbms_file_transfer package. Here I am trying to copy a log file from one folder to another. Thanks.

SQL> BEGIN
  2    dbms_file_transfer.copy_file(source_directory_object =>
  3      'SOURCE_DIR', source_file_name => 'sqlnet.log',
  4      destination_directory_object => 'DEST_DIR',
  5      destination_file_name => 'sqlnet.log');
  6  END;
  7  /
BEGIN
*
ERROR at line 1:
ORA-19505: failed to identify file "c:\temp\source\sqlnet.log"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 3223)
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 84
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 193
ORA-06512: at line 2
Categories: DBA Blogs

Use GPT-4o Mini Locally with Text and Images

Pakistan's First Oracle Blog - Thu, 2024-07-18 16:54

  This video introduces GPT-4o mini by OpenAI, which is quite cost-efficient and performant, and shows how to use it with both text and images.


Code:

from openai import OpenAI
import base64
import requests
import os

## Set the API key and model name
MODEL="gpt-4o-mini"
os.environ.get('OPENAI_API_KEY')
client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

IMAGE_PATH="nm.png"
base64_image = encode_image(IMAGE_PATH)

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant that responds in Markdown. Help me with this image!"},
        {"role": "user", "content": [
            {"type": "text", "text": "Describe the image? how many girls are there?"},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{base64_image}"}
            }
        ]}
    ],
    temperature=0.0,
)

print(response.choices[0].message.content)

-

#pip install -U openai
#export OPENAI_API_KEY=""

from openai import OpenAI
import os

## Set the API key and model name
MODEL="gpt-4o-mini"
os.environ.get('OPENAI_API_KEY')
client = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

completion = client.chat.completions.create(
  model=MODEL,
  messages=[
    {"role": "system", "content": "You are a helpful assistant. Help me with my question!"},
    {"role": "user", "content": "A bat and a ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?"}  
  ]
)

print("Assistant: " + completion.choices[0].message.content)
Categories: DBA Blogs

Install Mistral Nemo Locally and Test for Multi-Lingual, Function Calling

Pakistan's First Oracle Blog - Thu, 2024-07-18 16:51

 This video installs Mistral NeMo locally and tests it on multi-lingual tasks, math, coding, and function calling.


Code:

conda create -n nemo python=3.11 -y && conda activate nemo

pip install torch
pip install git+https://github.com/huggingface/transformers.git
pip install mistral_inference
pip install huggingface_hub pathlib

from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)

conda install jupyter -y
pip uninstall charset_normalizer -y
pip install charset_normalizer
jupyter notebook


from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

prompt = "Write 10 sentences ending with the word beauty."

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)


#===============================
# Function Calling
#===============================

from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
        ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
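
The decoded result for the function-calling request is typically a JSON list of tool calls, possibly preceded by a [TOOL_CALLS] marker; both of those points are assumptions about the tokenizer's convention and are not shown in the video. A small, optional parse:

# Optional sketch (assumptions noted above): strip a possible [TOOL_CALLS]
# marker, then try to parse the remainder as a JSON list of tool calls;
# fall back to printing the raw text if the shape differs.
import json
payload = result.split("[TOOL_CALLS]", 1)[-1].strip()
try:
    calls = json.loads(payload)
    print(calls[0]["name"], calls[0]["arguments"])
except (ValueError, KeyError, IndexError, TypeError):
    print(result)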
Categories: DBA Blogs

Extreme PL/SQL - An Interpreter for a Simple Language

Pete Finnigan - Wed, 2024-07-17 22:26
I talked at a high level a few weeks ago about Extreme PL/SQL and gave a brief look at an interpreter I have been creating for a simple language based on BASIC. I have been keeping notes in a Word....[Read More]

Posted by Pete On 17/07/24 At 12:00 PM

Categories: Security Blogs

Oracle VirtualBox 7.0.20

Tim Hall - Wed, 2024-07-17 01:48

VirtualBox 7.0.20 has been released. The downloads and changelog are in the usual places. I’ve installed it on my Windows 10 and 11 machines with no drama. Vagrant: there was no new version of Vagrant since the last VirtualBox release. If you are new to Vagrant and want to learn, you might find this useful. Once you understand that, I … Continue reading "Oracle VirtualBox 7.0.20"


Install Codestral Mamba Locally - Best Math AI Model

Pakistan's First Oracle Blog - Tue, 2024-07-16 17:07

 This video installs Codestral Mamba locally, an open code model based on the Mamba2 architecture.



Code: 

conda create -n codestralmamba python=3.11 -y && conda activate codestralmamba

pip install torch huggingface_hub pathlib2

pip install "mistral_inference>=1" mamba-ssm causal-conv1d

from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'mamba-codestral-7B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/mamba-codestral-7B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)

mistral-chat $HOME/mistral_models/mamba-codestral-7B-v0.1 --instruct  --max_tokens 256
Categories: DBA Blogs
