The Great Ocean Drive

The Great Ocean Road near Melbourne is one of the most scenic coastal drives in the world. Here are some must-visit spots along the way:


1. Torquay

• Starting point of the Great Ocean Road.

• Famous for surfing, including Bells Beach.

• Visit the Australian National Surfing Museum.


2. Anglesea

• Great for beach walks and spotting kangaroos at the golf course.


3. Aireys Inlet & Split Point Lighthouse

• Stunning coastal views.

• Lighthouse made famous by the TV show Round the Twist.


4. Lorne

• A beautiful coastal town with great cafes and beaches.

• Check out Erskine Falls in the nearby Otway Ranges.


5. Teddy’s Lookout

• Offers breathtaking panoramic views of the ocean and coastline.


6. Kennett River

• One of the best places to see wild koalas.

• Also home to many colorful parrots.


7. Apollo Bay

• A great place for seafood and relaxing on the beach.

• Gateway to the Otway National Park.


8. Great Otway National Park

• Rainforest walks, waterfalls (like Hopetoun Falls), and the famous Redwood Forest.


9. The Twelve Apostles

• The most iconic attraction, featuring massive limestone stacks rising from the ocean.

• Best viewed at sunrise or sunset.


10. Gibson Steps

• Walk down the steps to the beach for an up-close view of the limestone cliffs.


11. Loch Ard Gorge

• A stunning inlet with dramatic cliffs and a tragic shipwreck history.


12. The Grotto

• A naturally formed sinkhole and arch offering a unique viewpoint.


13. London Arch (formerly London Bridge)

• A rock formation that partially collapsed in 1990, leaving a standalone arch.


14. The Bay of Islands

• Less crowded than the Twelve Apostles but equally stunning rock formations.


Would you like recommendations on accommodations or best times to visit?



About HPL

The Hamilton Public Library (HPL) is a comprehensive library system serving the city of Hamilton, Ontario, Canada. It operates 23 branches across the city, offering a wide range of services and resources to the community. 


Services and Resources:

• Collections: HPL provides access to a vast array of materials, including books, DVDs, Blu-ray discs, music CDs, audiobooks, e-books, video games, and archival materials related to the region. 

• Digital Resources: Patrons can utilize various online databases, download e-books and audiobooks, and access other digital media through the library’s online platforms.

• Programs and Events: HPL offers programs for children, youth, and adults, such as tutoring services, book clubs, and cultural events. 

• Special Services: The library provides unique services like free museum access with an HPL card, wireless printing at all branches, and real-time homework help for students. 


History:


Public libraries have been a part of Hamilton since the 1830s. In 1889, the city council approved funding for a public library, which opened in February 1890. A new main library, funded by philanthropist Andrew Carnegie, opened in 1913. Over the years, HPL has expanded to include multiple branches, serving various communities within the city. 


Branch Locations:


HPL’s branches are strategically located throughout Hamilton, including areas like Ancaster, Dundas, Stoney Creek, and Waterdown. The Central Library, situated at 55 York Blvd., serves as the main hub. 


For more detailed information on services, branch locations, and operating hours, visit the official Hamilton Public Library website.




The Hamilton Public Library (HPL) employs a variety of websites and technological services to enhance its offerings and provide patrons with diverse resources. Key platforms and services include:


Digital Media Platforms:

• OverDrive: HPL utilizes OverDrive to offer a vast collection of eBooks and audiobooks. Patrons can borrow digital titles compatible with various devices, ensuring flexible reading and listening options.

• BiblioBoard: This platform provides access to eBooks from local, Ontario-based, and independent authors, supporting local talent and offering unique reading materials. 

• My Capstone Library: Tailored for youth, this service offers fiction and non-fiction eBooks and audiobooks suitable for students, aiding in educational development. 


Educational and Technological Programs:

• Tech Programs: HPL conducts various technology-related classes, ranging from basic skills like “Using a Mouse” to advanced topics such as “iPad Level 1” and “Photo Editing.” These programs are designed to enhance digital literacy among patrons. 

• Drop-In Tech Help: Patrons can bring their devices, such as phones, tablets, or laptops, to receive assistance with tasks like setting up email or downloading eBooks, fostering a supportive learning environment. 


Technological Infrastructure:

• Integrated Library System (ILS): HPL implemented the Virtua ILS from VTLS, a comprehensive library management system that streamlines cataloging, circulation, and acquisition processes, enhancing overall operational efficiency. 

• RFID Technology: As the first major Canadian library system to install ITG RFID Technology Solutions, HPL has improved inventory management and self-service capabilities, offering patrons a more efficient borrowing experience. 


Community Partnerships:

• City School by Mohawk: In collaboration with City School by Mohawk, HPL is part of the Community Learning Alliance, providing opportunities for individuals seeking careers in technology or aiming to enhance their computer skills. 


Through these platforms and services, the Hamilton Public Library demonstrates a commitment to leveraging technology to meet the evolving needs of its community, promoting digital literacy, and providing access to a wide array of resources.



Challenges


Public libraries, including the Hamilton Public Library (HPL), face several technological challenges, particularly concerning their web services. While specific details about HPL’s internal challenges are not publicly available, common issues affecting public libraries include:

1. Cybersecurity Threats: Libraries are increasingly targeted by cyberattacks, leading to service disruptions and potential data breaches. For instance, the Fort Bend Public Library experienced a significant network disruption affecting its catalog and e-library services, prompting a $2.6 million investment in cybersecurity measures. 

2. Protecting Patron Privacy: With the rise of digital services, safeguarding user privacy has become more complex. A study highlighted that many libraries face challenges in protecting patron privacy due to a lack of training and technical knowledge. 

3. Funding Constraints: Limited budgets can hinder the adoption of new technologies and the maintenance of existing systems, affecting the quality and availability of web services.

4. Keeping Pace with Technological Advancements: Rapid technological changes require continuous updates to library systems and staff training, which can be resource-intensive.

5. Digital Equity and Access: Ensuring all community members have access to digital resources is a significant challenge. Libraries often serve as digital equity hubs, providing essential services to bridge the digital divide. 

6. Integration of Diverse Digital Platforms: Managing and integrating various digital platforms for e-books, audiobooks, and other online resources can be complex and may lead to inconsistent user experiences.

7. Website Usability: Designing user-friendly websites that cater to diverse patron needs, including accessibility features, is crucial but challenging.

8. Data Management and Analytics: Effectively collecting, managing, and analyzing data to improve services while respecting privacy concerns requires sophisticated systems and policies.

9. Staff Training and Development: Continuous technological advancements necessitate ongoing staff training to effectively manage and utilize new systems and tools.

10. Community Engagement: Leveraging technology to engage with the community, promote services, and gather feedback is essential but can be challenging to implement effectively.


Addressing these challenges requires strategic planning, adequate funding, staff training, and collaboration with technology partners to ensure that libraries can continue to provide valuable services in the digital age.


History of Ebrary

eBrary was a digital content platform that provided access to a vast collection of academic eBooks. It was widely used by universities, libraries, and research institutions, offering books across various disciplines, including science, technology, business, humanities, and medicine.


Key Features of eBrary:

• Subscription-Based Access: Institutions could subscribe to eBrary’s collection, granting their users access to thousands of eBooks.

• Download & Offline Reading: Users could read books online or download them for offline access.

• Highlighting & Note-Taking: Integrated tools allowed users to highlight text, take notes, and collaborate with others.

• Full-Text Search: Advanced search features made it easier to find relevant content within books.


Acquisition & Transition


eBrary was acquired by ProQuest in 2011. In 2017, ProQuest merged eBrary with another platform, Ebook Central, which became its primary eBook service. Ebook Central retained many of eBrary’s features while expanding its catalog and improving user experience.


If you’re looking for eBrary-like services today, ProQuest Ebook Central is the closest successor. It’s widely used by academic institutions for research and learning. Would you like recommendations for alternatives?



E-Library Subscription Systems

There are several book subscription services that function like Netflix, offering unlimited access to a library of books for a monthly fee. Some of the most popular options include:

1. Kindle Unlimited (Amazon) – Provides access to over a million eBooks, audiobooks, and magazines for a monthly subscription. However, it does not include all Kindle books.

2. Scribd – Offers unlimited access to eBooks, audiobooks, magazines, and even documents for a monthly fee. It has a wide selection across multiple genres.

3. Kobo Plus – Similar to Kindle Unlimited, Kobo Plus provides unlimited access to eBooks and audiobooks for a flat monthly fee.

4. Audible Plus – If you’re interested in audiobooks, Audible Plus gives unlimited listening access to a curated selection of audiobooks and podcasts.

5. 24symbols – A lesser-known subscription service offering unlimited access to a vast library of eBooks in multiple languages.

6. Oyster (Discontinued) – Oyster was a popular “Netflix for books” service, but it was acquired by Google and shut down in 2016.


Would you be looking for something with a specific feature, like academic books, business books, or niche content?



Dataplex parameters

In Google Cloud Dataplex, the Catalog is a key component for organizing and managing metadata for data assets. Here’s how Tags, Aspects, and Entity Details are used in the Dataplex Catalog:


1. Tags


Tags in Dataplex Catalog are metadata annotations that help categorize and provide additional context to assets. They are often used for:

• Data classification (e.g., “PII”, “Confidential”)

• Ownership & Governance (e.g., “Finance Team”, “Compliance Required”)

• Quality indicators (e.g., “Verified”, “Needs Review”)


Tags can be assigned at different levels, such as tables, files, and entities, to help in searching, filtering, and managing metadata effectively.




2. Aspects


Aspects represent specific metadata categories or attributes that describe a data entity. They help structure metadata into different dimensions. Examples include:

• Technical aspects (e.g., schema, data format)

• Business aspects (e.g., data owner, usage policies)

• Operational aspects (e.g., freshness, update frequency)


Aspects provide a structured way to enrich metadata, making it easier to discover and manage assets in Dataplex.




3. Entity Details


An Entity in the Dataplex Catalog represents a logical abstraction of a data asset. The Entity Details include:

• Type: Table, File, Stream, etc.

• Location: Cloud Storage, BigQuery, or another storage system

• Schema: Columns, data types, and descriptions

• Lineage & Relations: Connections to other datasets


Entity details help in data discovery, governance, and integration across different Google Cloud services.
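For illustration, here is a minimal Python sketch using the google-cloud-dataplex client library to fetch an entry and print the aspects attached to it. The project, location, entry-group, and entry IDs are placeholders, and the exact client surface may differ by library version:

from google.cloud import dataplex_v1

client = dataplex_v1.CatalogServiceClient()

# Placeholder resource name for an entry in an entry group
entry_name = ("projects/my-project/locations/us-central1/"
              "entryGroups/my-entry-group/entries/my-table")

# FULL view asks the API to include the aspects attached to the entry
entry = client.get_entry(request={"name": entry_name, "view": dataplex_v1.EntryView.FULL})

print(entry.entry_type)
for aspect_key, aspect in entry.aspects.items():
    print(aspect_key, aspect.data)  # each aspect payload is structured metadata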


Would you like to explore how these concepts fit into your data governance strategy?



Bloomberg REST API

In Bloomberg Terminal, the Instrument Reference Data section contains fields that provide detailed metadata about financial instruments, including oil prices. To find the relevant Bloomberg fields for oil prices, you can look for Crude Oil Futures, Spot Prices, or Benchmarks under commodities.


Some commonly used Bloomberg fields for oil prices include:

1. Spot Prices:

• COA COMDTY (Brent Crude Oil)

• CL1 COMDTY (WTI Crude Oil Front Month)

• CO1 COMDTY (Brent Crude Front Month)

2. Futures Contracts:

• CL FUT (WTI Crude Oil Futures)

• CO FUT (Brent Crude Oil Futures)

3. Historical Data Fields:

• PX_LAST (Last Price)

• PX_OPEN (Opening Price)

• PX_HIGH (High Price)

• PX_LOW (Low Price)

4. Other Key Reference Data Fields:

• ID_BB_GLOBAL (Bloomberg Global Identifier)

• ID_ISIN (International Securities Identification Number)


To retrieve oil prices in Bloomberg Field Discovery, you can:

1. Search for Crude Oil or Commodities under “Instrument Reference Data.”

2. Use getData for real-time or snapshot data.

3. Use getHistory for historical price trends.
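As an illustration, here is a minimal Python sketch using the Bloomberg Desktop API (blpapi) rather than the web-based Field Discovery flow. It assumes a Terminal running locally on the default port, and uses the generic front-month tickers and the PX_LAST field listed above:

import blpapi

# Connect to the locally running Terminal (localhost:8194 by default)
session = blpapi.Session()
session.start()
session.openService("//blp/refdata")
service = session.getService("//blp/refdata")

# Request the last price for front-month WTI and Brent futures
request = service.createRequest("ReferenceDataRequest")
for ticker in ("CL1 Comdty", "CO1 Comdty"):
    request.getElement("securities").appendValue(ticker)
request.getElement("fields").appendValue("PX_LAST")
session.sendRequest(request)

# Drain events until the final response arrives
while True:
    event = session.nextEvent()
    for msg in event:
        print(msg)
    if event.eventType() == blpapi.Event.RESPONSE:
        break
session.stop()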


Would you like help with writing a Python script to pull oil price data from Bloomberg?



Data Fabric as an Architecture

Data Fabric: Tools, Vendors, and Implementation Details


Data Fabric is an advanced data architecture that integrates, governs, and manages data across various environments, including on-premises, cloud, hybrid, and multi-cloud setups. It helps organizations break down data silos, enable real-time analytics, and enhance AI-driven decision-making.




1. Key Components of Data Fabric


a. Data Integration & Connectivity

• Connects disparate data sources (databases, data lakes, APIs, streaming data, etc.).

• Supports ETL (Extract, Transform, Load), ELT (Extract, Load, Transform), and data virtualization.

• Examples: Apache NiFi, Talend, Informatica


b. Metadata Management & Data Cataloging

• AI-driven metadata scanning for data lineage, classification, and discovery.

• Enables self-service access to trusted data.

• Examples: Collibra, Alation, IBM Watson Knowledge Catalog


c. Data Governance & Security

• Ensures data privacy, access controls, encryption, and regulatory compliance (GDPR, HIPAA, CCPA).

• Enables policy-based governance across distributed environments.

• Examples: Immuta, Privacera, Informatica Axon


d. Real-Time Data Processing & Streaming

• Supports streaming data ingestion for real-time analytics and event-driven architectures.

• Examples: Apache Kafka, Confluent, Google Dataflow


e. AI/ML-Driven Automation & Data Orchestration

• Uses AI/ML for automated data integration, quality checks, and optimization.

• Helps in predictive analytics, anomaly detection, and intelligent data workflows.

• Examples: IBM Cloud Pak for Data, DataRobot, Databricks




2. Top Data Fabric Vendors & Platforms


a. IBM Cloud Pak for Data

• AI-powered data fabric solution that integrates data from various sources.

• Provides metadata management, governance, and automated data pipelines.

• Best for enterprises needing AI-driven automation.


b. Informatica Intelligent Data Management Cloud (IDMC)

• A cloud-native platform for data integration, quality, governance, and security.

• Offers low-code automation for data engineering and analytics.

• Best for hybrid and multi-cloud data strategies.


c. Talend Data Fabric

• End-to-end data integration and governance platform.

• Features self-service data discovery, data quality, and security controls.

• Best for organizations needing data trust and compliance.


d. Microsoft Azure Purview

• Enterprise-wide data governance and compliance solution.

• Integrated with Azure Synapse, Power BI, and AI/ML services.

• Best for organizations already using the Microsoft ecosystem.


e. AWS Data Lake & Glue

• AWS Glue provides serverless data integration and ETL capabilities.

• AWS Lake Formation helps create a centralized data catalog.

• Best for companies with AWS-based infrastructure.


f. Google Cloud Dataplex

• Unifies data lakes, data warehouses, and AI services under a single fabric layer.

• Provides metadata-driven management and security.

• Best for businesses using Google Cloud AI and BigQuery.




3. Implementation Roadmap for Data Fabric


Step 1: Define Business Objectives

• Identify key use cases (customer 360, fraud detection, predictive analytics, etc.).

• Assess data sources, integrations, and compliance needs.


Step 2: Build a Unified Metadata Layer

• Implement data cataloging and governance (Collibra, Alation, Informatica).

• Enable automated metadata extraction and tagging.


Step 3: Implement Data Integration & Connectivity

• Use ETL, ELT, or data virtualization to connect disparate data sources.

• Deploy real-time streaming for dynamic data processing.
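To make the streaming leg concrete, here is a minimal sketch with the confluent-kafka Python client; the broker address, topic, and payload are placeholders:

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

# Emit a change event onto a topic that downstream pipelines consume
producer.produce("customer-updates", key="cust-42", value='{"status": "active"}')
producer.flush()  # block until the message is delivered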


Step 4: Apply AI/ML for Automation

• Use AI to detect patterns, automate quality checks, and optimize data workflows.

• Implement recommendation engines for business users.


Step 5: Establish Security & Compliance

• Define role-based access control (RBAC) and data masking.

• Automate compliance reporting and auditing.


Step 6: Enable Self-Service Analytics

• Provide business users and data scientists self-service access to trusted data.

• Integrate with BI tools (Power BI, Tableau, Looker, etc.).




4. Benefits of Data Fabric


✔ Breaks Data Silos → Provides a unified view of enterprise data.

✔ Improves Data Quality & Trust → AI-powered metadata and governance.

✔ Enables Real-Time Analytics → Supports batch and streaming data.

✔ Enhances Security & Compliance → Ensures data privacy and regulatory adherence.

✔ Accelerates AI/ML Adoption → Provides ready-to-use, high-quality datasets.


Would you like recommendations based on your specific cloud provider or industry?



Data Quality frameworks

There are several open-source data quality frameworks that can help with profiling, validation, and monitoring of data. Here are some of the top options:


1. Great Expectations

• GitHub: https://github.com/great-expectations/great_expectations

• Features: Data validation, profiling, and documentation with expectations and tests.

• Best For: Teams needing data quality pipelines in Python.

• Integration: Works with Pandas, Spark, SQL databases, and cloud storage.


2. Deequ (by AWS)

• GitHub: https://github.com/awslabs/deequ

• Features: Data profiling, constraints validation, anomaly detection.

• Best For: Large-scale data validation using Apache Spark.

• Integration: Works well in AWS environments and big data platforms.


3. Monte Carlo

• GitHub: No open-source repo, but alternatives exist like Soda SQL.

• Features: Automated anomaly detection and observability.

• Best For: Data engineering teams looking for anomaly detection in data pipelines.


4. Soda SQL

• GitHub: https://github.com/sodadata/soda-sql

• Features: SQL-based data monitoring, anomaly detection, and validation.

• Best For: Teams using SQL-based data warehouses like Snowflake, BigQuery, and Redshift.


5. CloudDQ (by Google)

• GitHub: https://github.com/GoogleCloudPlatform/cloud-data-quality

• Features: Data quality rules for Google BigQuery.

• Best For: Google Cloud users needing automated data quality validation.


6. OpenMetadata

• GitHub: https://github.com/open-metadata/OpenMetadata

• Features: Data discovery, metadata management, lineage tracking, and quality checks.

• Best For: Enterprises managing metadata and governance.
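To give a flavor of the expectation style used by a tool like Great Expectations, here is a minimal sketch with its classic Pandas-backed API (method names vary across versions, so treat it as illustrative):

import great_expectations as ge
import pandas as pd

# Wrap a plain DataFrame so expectation methods become available
df = ge.from_pandas(pd.DataFrame({"order_id": [1, 2, None], "amount": [10.0, 12.5, 9.9]}))

# Each expectation returns a validation result with a success flag
not_null = df.expect_column_values_to_not_be_null("order_id")
in_range = df.expect_column_values_to_be_between("amount", min_value=0, max_value=1000)
print(not_null.success, in_range.success)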


Would you like recommendations based on a specific tech stack or business use case?



SAP ERP raw tables

2. Understanding Field Purposes:


To comprehend the purpose of each field:

• Data Elements: Each field in a table is associated with a data element, which provides semantic meaning. By examining the data element (found in the ROLLNAME field of DD03L), you can refer to DD04T to obtain descriptions and gain insights into the field’s purpose.
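As a sketch of how this lookup can be automated, the snippet below reads field names and data elements for one table from DD03L through the classic RFC_READ_TABLE function module, assuming the pyrfc library and placeholder connection parameters (join ROLLNAME to DD04T afterwards for the descriptions):

from pyrfc import Connection

conn = Connection(ashost="sap-host", sysnr="00", client="100",
                  user="RFC_USER", passwd="secret")  # placeholder credentials

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="DD03L",
    DELIMITER="|",
    OPTIONS=[{"TEXT": "TABNAME = 'BKPF'"}],  # example: accounting document headers
    FIELDS=[{"FIELDNAME": "FIELDNAME"}, {"FIELDNAME": "ROLLNAME"}],
)
for row in result["DATA"]:
    fieldname, rollname = (c.strip() for c in row["WA"].split("|"))
    print(fieldname, rollname)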


3. Implementing Data Sensitivity Classification:


SAP provides mechanisms to classify data based on sensitivity:

• Data Protection Workbench: This tool allows you to identify and classify personal data within your SAP system. By navigating to the Data Protection tab, you can assign sensitivity levels to various data entities. 

• Custom Classification: For fields not covered by standard classifications, you can implement custom classifications by:

• Reviewing field descriptions and data elements to assess the nature of the data.

• Collaborating with business process owners to determine the sensitivity of specific data points.

• Utilizing SAP Information Lifecycle Management (ILM) to define and enforce data retention and protection policies.


4. External Resources and Tools:


Several external resources can aid in understanding SAP table structures and data classifications:

• SAP Community Discussions: Engage with the SAP community to explore discussions on extracting table structures and field information. 

• Third-Party Documentation: Websites like SAP Tables provide descriptions for SAP’s ERP systems, which can be instrumental in understanding table and field purposes. 


5. Considerations:

• Access Permissions: Ensure you have the necessary authorizations to access and extract metadata from the SAP system.

• Data Security: Handle extracted metadata with care, especially if it includes sensitive information, to maintain compliance with data protection regulations.


By following these steps, you can systematically compile a list of SAP ERP tables and columns, understand their purposes, and classify their sensitivity to bolster data security and compliance efforts.



Oracle Fusion ERP raw tables

Oracle Fusion Financials utilizes a comprehensive set of tables to manage and store financial data across its various modules. Below is an overview of key tables associated with each financial module:


General Ledger (GL):

• GL_LEDGERS: Stores information about the ledgers defined in the system.

• GL_CODE_COMBINATIONS: Contains the chart of accounts and segment values.

• GL_JE_HEADERS: Holds journal entry header information.

• GL_JE_LINES: Contains journal entry line details.


Accounts Payable (AP):

• AP_INVOICES_ALL: Stores supplier invoice information.

• AP_SUPPLIERS: Contains supplier details and contact information.

• AP_PAYMENTS_ALL: Records payments made to suppliers.

• AP_PAYMENT_SCHEDULES_ALL: Manages payment schedule information.


Accounts Receivable (AR):

• AR_INVOICES_ALL: Stores customer invoice information.

• AR_CUSTOMERS: Contains customer details and contact information.

• AR_PAYMENTS_ALL: Records payments received from customers.

• AR_PAYMENT_SCHEDULES_ALL: Manages payment schedule information.


Cash Management (CM):

• CE_BANK_ACCOUNTS: Stores bank account information.

• CE_STATEMENTS: Contains bank statement details.

• CE_RECONCILIATION_HEADERS: Manages bank reconciliation header information.


Fixed Assets (FA):

• FA_ASSET_HISTORY: Records asset transaction history.

• FA_ADDITIONS_B: Contains information about newly added assets.

• FA_BOOKS: Stores asset book information.

• FA_DEPRN_DETAIL: Manages asset depreciation details.


Expense Management (EM):

• EXM_EXPENSE_REPORTS: Stores employee expense report information.

• EXM_EXPENSE_ITEMS: Contains individual expense item details.

• EXM_EXPENSE_APPROVERS: Manages expense report approver information.


For a comprehensive and detailed list of all tables, including their descriptions and relationships, consult the official Oracle Fusion Cloud Financials documentation, which details the tables and views for each module. 


Additionally, the Oracle community forums and customer connect discussions can be valuable resources for specific inquiries and shared experiences related to Oracle Fusion Financials tables. 


Please note that access to certain tables and data may require appropriate permissions within your Oracle Fusion Financials implementation.





Oracle Fusion Financials comprises numerous tables across its modules, each containing various columns that store specific data. Understanding the purpose and sensitivity of these columns is crucial for effective data management and compliance. Below is an overview of key tables, their columns, purposes, and data sensitivity considerations:


General Ledger (GL):

1. GL_JE_HEADERS: Stores journal entry header information.

• Columns:

• JE_HEADER_ID: Primary key for journal entries.

• JE_BATCH_ID: Identifier linking to the journal batch.

• STATUS: Indicates the approval status of the journal entry.

• Sensitivity: Generally low; however, the STATUS column may indicate internal financial processes.

2. GL_JE_LINES: Contains detailed journal entry line information.

• Columns:

• JE_LINE_ID: Primary key for journal entry lines.

• ACCOUNTED_DR: Debit amount in the accounted currency.

• ACCOUNTED_CR: Credit amount in the accounted currency.

• Sensitivity: Medium; financial amounts should be protected to prevent unauthorized access.


Accounts Payable (AP):

1. AP_INVOICES_ALL: Stores supplier invoice information.

• Columns:

• INVOICE_ID: Primary key for invoices.

• VENDOR_ID: Identifier for the supplier.

• INVOICE_AMOUNT: Total amount of the invoice.

• Sensitivity: High; contains financial transaction details and supplier information.

2. AP_SUPPLIERS: Contains supplier details.

• Columns:

• SUPPLIER_ID: Primary key for suppliers.

• SUPPLIER_NAME: Name of the supplier.

• TAXPAYER_ID: Supplier’s tax identification number.

• Sensitivity: High; includes personally identifiable information (PII) such as tax IDs.


Accounts Receivable (AR):

1. AR_INVOICES_ALL: Stores customer invoice information.

• Columns:

• INVOICE_ID: Primary key for customer invoices.

• CUSTOMER_ID: Identifier for the customer.

• INVOICE_AMOUNT: Total amount billed to the customer.

• Sensitivity: High; contains financial transaction details and customer information.

2. AR_CUSTOMERS: Contains customer details.

• Columns:

• CUSTOMER_ID: Primary key for customers.

• CUSTOMER_NAME: Name of the customer.

• CONTACT_NUMBER: Customer’s contact information.

• Sensitivity: High; includes PII such as contact details.


Data Sensitivity and Security Measures:


Oracle Fusion Applications implement several measures to protect sensitive data:

• Masking and Encryption: Sensitive fields in application user interfaces are masked to prevent unauthorized viewing. Encryption APIs are utilized to protect data during transmission and storage. 

• Data Classification: Oracle Data Safe provides predefined sensitive types categorized under Identification Information, Financial Information, and more. This classification aids in identifying and securing sensitive columns. 

• Data Masking: In non-production environments, data masking techniques are applied to scramble sensitive data, ensuring that it is not exposed during development or testing phases. 


For a comprehensive understanding of table structures, columns, and their purposes, consulting the official Oracle Fusion Financials documentation is recommended. This resource provides detailed descriptions of tables and columns, aiding in effective data management and security implementation. 


By leveraging these resources and implementing robust data security measures, organizations can ensure the confidentiality and integrity of their financial data within Oracle Fusion Financials.


Bridging Application

If you’re looking for open-source platforms to bridge Azure DevOps with another web application (for mapping and bridging data, workflows, or integrations), here are several options:




1. Integration Platforms (iPaaS & Middleware)


These platforms help connect Azure DevOps with other web apps via APIs, event-driven workflows, or data synchronization.


A. Apache Camel

• Use Case: Enterprise integration using prebuilt connectors.

• How it Helps: Supports Azure DevOps API and other systems like Jira, GitHub, Slack, and databases.

• Link: https://camel.apache.org/


B. N8n (Workflow Automation Tool)

• Use Case: Automate workflows between Azure DevOps and other apps without heavy coding.

• How it Helps: Allows for low-code automation with built-in API integration.

• Link: https://n8n.io/


C. OpenIntegrationHub

• Use Case: Open-source iPaaS for real-time data synchronization.

• How it Helps: Bridges Azure DevOps with CRMs, ERPs, or other cloud apps.

• Link: https://openintegrationhub.org/




2. API Gateways & Middleware for Bridging Services


If you need to expose Azure DevOps to another web app via APIs, these platforms help:


D. Kong API Gateway

• Use Case: API management and middleware for Azure DevOps.

• How it Helps: Acts as an API proxy to expose Azure DevOps services securely to another app.

• Link: https://konghq.com/kong/


E. WSO2 API Manager

• Use Case: Create, secure, and manage APIs that connect Azure DevOps with other web apps.

• How it Helps: Offers API transformation and security features.

• Link: https://wso2.com/api-management/




3. Event-Driven Platforms for Real-Time Sync


For event-driven architecture and real-time syncing:


F. Apache Kafka

• Use Case: Streaming events between Azure DevOps and other apps.

• How it Helps: Enables asynchronous processing of DevOps data in real time.

• Link: https://kafka.apache.org/


G. Apache NiFi

• Use Case: Low-code tool for moving data between Azure DevOps and other systems.

• How it Helps: Drag-and-drop UI for connecting multiple systems.

• Link: https://nifi.apache.org/




4. Scripting & Automation Tools


If you need a lightweight solution without full middleware:


H. GitHub Actions + Azure DevOps

• Use Case: Automate workflows between Azure DevOps and other apps.

• How it Helps: Allows event-based automation via GitHub Webhooks.

• Link: https://github.com/features/actions


I. Python (FastAPI, Flask) with Azure DevOps API

• Use Case: Custom integration script to map Azure DevOps data to another web app.

• How it Helps: Use Azure DevOps REST API to interact with work items, pipelines, and repositories.

• Docs: https://learn.microsoft.com/en-us/rest/api/azure/devops/
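As a small illustration of this last option, here is a sketch that reads a single work item over the Azure DevOps REST API with requests; the organization, project, work-item ID, and personal access token are placeholders:

import requests

ORG = "my-org"          # placeholder organization
PROJECT = "my-project"  # placeholder project
PAT = "your-personal-access-token"

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/42?api-version=7.0"
resp = requests.get(url, auth=("", PAT))  # the PAT goes in the password slot
resp.raise_for_status()

fields = resp.json()["fields"]
print(fields["System.Title"], fields["System.State"])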




Which One is Best for You?

• For No-Code Integration → N8n

• For API Middleware → Kong API Gateway

• For Event-Based Bridging → Apache Kafka

• For Custom Scripting → FastAPI + Azure DevOps API


Do you have a specific use case in mind? I can help suggest the best approach based on your needs.



Tableau useful links

https://chatgpt.com/canvas/shared/67d136b36000819185c8aabaa14f71ac


Dagster

Dagster is an open-source data orchestrator designed for building, running, and monitoring data pipelines. It helps manage complex data workflows by ensuring reliability, observability, and modularity. Unlike traditional workflow schedulers, Dagster treats data pipelines as software assets, emphasizing testing, type safety, and version control.


Key Features:

• Declarative Pipeline Definition: Uses Python to define workflows as directed acyclic graphs (DAGs) of computations.

• Modularity and Reusability: Allows breaking down pipelines into reusable components.

• Observability & Monitoring: Provides built-in logging, metrics, and dashboards to track job execution.

• Integration Support: Works with tools like dbt, Apache Spark, Snowflake, and cloud storage.

• Local & Cloud Execution: Can run jobs locally or in cloud environments like AWS, GCP, or Kubernetes.
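A minimal sketch of what the declarative, asset-based style looks like (the asset names are illustrative):

from dagster import Definitions, asset, materialize

@asset
def raw_numbers():
    # Stand-in for an extract step
    return [1, 2, 3]

@asset
def doubled_numbers(raw_numbers):
    # Dagster wires the dependency from the argument name
    return [n * 2 for n in raw_numbers]

defs = Definitions(assets=[raw_numbers, doubled_numbers])

if __name__ == "__main__":
    print(materialize([raw_numbers, doubled_numbers]).success)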


Dagster is a good choice for organizations looking to improve data pipeline reliability while maintaining flexibility in development and deployment. Would you like insights on how it compares to other orchestrators like Apache Airflow or Prefect?



Data Fabric

End-to-End Overview of Data Fabric


1. What is Data Fabric?


Data fabric is an architectural approach that enables seamless, real-time, and intelligent data management across a distributed ecosystem. It unifies disparate data sources (on-prem, cloud, hybrid) into a connected data layer with automation, governance, and real-time access.


2. Key Components of Data Fabric

• Data Integration & Virtualization: Connects diverse data sources across multiple environments (SQL, NoSQL, cloud storage, etc.).

• Metadata Management & Cataloging: Establishes a unified view with active metadata (data lineage, relationships, etc.).

• Data Governance & Security: Enforces access controls, policies, and compliance standards.

• AI & Automation: Uses AI/ML to automate data discovery, classification, and optimization.

• Data Orchestration & Pipelines: Ensures efficient movement, transformation, and processing of data.


3. End-to-End Data Fabric Lifecycle


Step 1: Data Discovery & Connectivity

• Identify and connect structured, semi-structured, and unstructured data sources (databases, SaaS apps, APIs, files).

• Leverage metadata-driven discovery to map relationships across different data assets.


Step 2: Data Integration & Unification

• Implement data virtualization to enable real-time access without data duplication.

• Use ETL (Extract, Transform, Load) or ELT pipelines to consolidate data where necessary.


Step 3: Data Governance & Security

• Apply role-based access control (RBAC), encryption, and compliance policies (GDPR, HIPAA).

• Maintain data lineage and audit trails to ensure regulatory compliance.


Step 4: AI-Driven Data Insights & Self-Service

• Utilize AI/ML for automated tagging, data quality checks, and anomaly detection.

• Enable self-service analytics via a unified data catalog for business users.


Step 5: Data Processing & Analytics

• Provide a semantic layer for querying across distributed sources.

• Support real-time data streaming (Kafka, Spark) and batch processing (Hadoop, Snowflake, BigQuery).


Step 6: Continuous Optimization & Monitoring

• Implement observability and performance monitoring for data pipelines.

• Use AI-driven recommendations for cost efficiency and query optimization.


4. Benefits of Data Fabric


✔ Unified Data Access: Single source of truth across hybrid environments.

✔ Faster Insights: Reduces time spent on data integration and preparation.

✔ Stronger Governance & Compliance: Centralized controls for security and privacy.

✔ Scalability & Flexibility: Adapts to cloud-native and hybrid infrastructures.


Would you like a technical deep dive on specific aspects, such as tools, implementation strategies, or best practices?



Professional Trucking in the USA and Canada

Getting a trucking license (Commercial Driver’s License or CDL) in Canada and the USA involves several steps. Here’s a general guide for both countries:




1. Meet the Basic Requirements


Before applying, ensure you meet these conditions:

✅ Age Requirement

• Canada: Minimum 18 years old (for intra-provincial driving) or 21 years old (for inter-provincial or US-bound driving).

• USA: Minimum 18 years old (for intrastate driving) or 21 years old (for interstate and hazardous materials).


✅ Valid Driver’s License

• You must have a valid non-commercial driver’s license (Class 5 in Canada, regular driver’s license in the USA).


✅ Medical Examination

• A medical fitness exam is required to ensure you’re physically fit to drive a commercial truck.

• In the USA, you must get a DOT (Department of Transportation) medical card.


✅ Clean Driving Record

• Some provinces/states require a record free from major violations.




2. Choose the Right CDL/Class of License


There are different commercial license types based on what kind of truck you plan to drive:


Canada (Truck License Classes)

• Class 1: Allows you to drive semi-trucks (tractor-trailers).

• Class 3: Allows you to drive straight-body trucks.

• Class 5: Needed before upgrading to Class 1 or 3.


USA (CDL Classes)

• Class A: Allows you to drive tractor-trailers and combination vehicles.

• Class B: For straight trucks, buses, and dump trucks.

• Class C: For small commercial vehicles (hazardous materials, passenger vans).




3. Enroll in a Truck Driving School (Optional but Recommended)

• Mandatory in Canada: Provinces like Ontario, Alberta, and BC require Mandatory Entry-Level Training (MELT) before taking the test.

• Recommended in the USA: While not always required, most trucking companies prefer drivers with formal training.


Costs range from $3,000 – $10,000 depending on the school and location.




4. Get a Commercial Learner’s Permit (CLP)


Before taking the full CDL test, you must first obtain a learner’s permit:

• Pass a written knowledge test (covering trucking laws, road signs, and safety).

• Some states/provinces may require a vision test.

• Hold the learner’s permit for a minimum period (e.g., 14 days in the USA, varies in Canada) before taking the road test.




5. Take the Road Test for Your CDL


Once you’re ready, schedule your road test, which includes:

✅ Pre-trip inspection (Checking truck safety features).

✅ Basic control test (Reversing, turning, parking).

✅ On-road driving test (Driving under real traffic conditions).


If you pass, you’ll receive your CDL/Truck Driver’s License!




6. Additional Endorsements (Optional but Useful)


To increase job opportunities, you can get endorsements for specialized trucking jobs:

• Hazardous Materials (H)

• Tanker Trucks (N)

• Double/Triple Trailers (T)


Some of these require additional testing and background checks.




7. Apply for Trucking Jobs


Once licensed, you can start applying for entry-level trucking jobs or join a company’s paid training program. Many large trucking companies help with licensing and job placement.




Bonus: Can I Get a Trucking License Fast?


✅ Fast-track programs exist, allowing you to get a CDL in 4–8 weeks if you take intensive training.

✅ Some trucking companies offer company-sponsored CDL training, which reduces upfront costs.


Would you like help finding a truck driving school near you?



Skilled Labour Salaries across the USA and Canada

In Canada and the USA, skilled labor professionals can earn high salaries, especially in trades that require specialized expertise, experience, and certifications. The highest-paid skilled labor professionals typically include:


1. Canada

• Powerline Technicians: CAD $80,000 – $150,000 per year

• Industrial Electricians: CAD $75,000 – $130,000 per year

• Elevator Mechanics: CAD $80,000 – $140,000 per year

• Heavy Equipment Technicians: CAD $70,000 – $120,000 per year

• Welders (Specialized, e.g., underwater welding): CAD $80,000 – $200,000+ per year

• Aircraft Mechanics: CAD $70,000 – $120,000 per year

• Boilermakers: CAD $80,000 – $140,000 per year


2. USA

• Elevator Installers & Repairers: USD $80,000 – $140,000 per year

• Powerline Technicians: USD $80,000 – $140,000 per year

• Industrial Electricians: USD $70,000 – $130,000 per year

• HVAC Technicians (Specialized, e.g., commercial or industrial work): USD $60,000 – $120,000 per year

• Underwater Welders: USD $100,000 – $300,000 per year

• Aircraft Mechanics & Technicians: USD $70,000 – $120,000 per year

• Crane Operators: USD $80,000 – $130,000 per year


Key Factors Affecting Salary:

• Location (cost of living and demand)

• Industry (union vs. non-union jobs)

• Experience and certifications

• Specializations (e.g., underwater welding, high-voltage electrical work)


Among these, underwater welders have the highest earning potential but also face significant risks. Elevator technicians, powerline workers, and industrial electricians also consistently rank among the highest-paid skilled trades.



Alteryx publish to Tableau precision error

Based on the screenshot, it looks like you’re using Alteryx to transform data and then write the results to a Tableau output file. The error shown, “Precision above supported max”, usually means that at least one numeric field in your workflow exceeds the allowable precision/scale for the output format. In other words, Alteryx is trying to push a decimal value that’s too large or too precise for the Tableau output tool to handle.


Below are some steps and tips to resolve these issues:



Data refine layer

When working with incrementally loaded raw tables, the key data modeling strategy should balance performance, storage efficiency, and ease of access. A good generic approach follows these steps:


1. Understand Data Ingestion and Change Patterns

• Identify how data is incrementally loaded (append-only, upserts, or late-arriving updates).

• Determine if timestamps, versioning, or other fields track changes.


2. Implement a Staging Layer

• Maintain a staging table to store incremental raw data before transformation.

• If source data includes updates, consider Change Data Capture (CDC) methods (e.g., merge strategies).


3. Design an Efficient Processing Strategy

• Use an INSERT-ONLY approach when possible to avoid expensive updates.

• For deduplication or upserts, use partitioning + merge strategies instead of full-table scans.

• Consider delta processing (handling only new/changed records).
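As a toy illustration of delta processing, the pandas sketch below applies a staging delta to a refined table keyed on id, replacing changed rows and appending new ones (table and column names are hypothetical):

import pandas as pd

silver = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})    # current refined table
staging = pd.DataFrame({"id": [2, 3], "value": ["b2", "c"]})  # incoming delta

# Keep untouched rows, then append the new and changed records
merged = pd.concat(
    [silver[~silver["id"].isin(staging["id"])], staging]
).sort_values("id").reset_index(drop=True)
print(merged)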


4. Use a Layered Data Model


A multi-tier architecture is beneficial:

• Raw Layer (Bronze): Stores incremental raw data in its original format (no transformation).

• Processed Layer (Silver): Applies de-duplication, filtering, and joins to create a clean dataset.

• Aggregated Layer (Gold): Optimized for analytics and reporting (pre-aggregations, materialized views).


5. Optimize for Query Performance

• Use partitioning (by time, region, or business logic) for faster lookups.

• Use clustering or indexing on frequently queried columns.

• In cloud data warehouses (Snowflake, BigQuery, Redshift), leverage automatic clustering and pruning.


6. Handle Slowly Changing Dimensions (SCDs)

• For historical tracking, use Type 2 SCD (adding versioned records).

• For real-time updates, use soft deletes or effective dating.
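A toy sketch of Type 2 versioning in pandas (columns are illustrative): expire the current row with an end date, then append the new version:

import pandas as pd

dim = pd.DataFrame({
    "customer_id": [42], "city": ["Hamilton"],
    "valid_from": ["2024-01-01"], "valid_to": [None], "is_current": [True],
})

# Incoming change: customer 42 moved, so close out the current row
mask = (dim["customer_id"] == 42) & dim["is_current"]
dim.loc[mask, ["valid_to", "is_current"]] = ["2025-03-01", False]

# Append the new version as the current record
new_row = {"customer_id": 42, "city": "Toronto",
           "valid_from": "2025-03-01", "valid_to": None, "is_current": True}
dim = pd.concat([dim, pd.DataFrame([new_row])], ignore_index=True)
print(dim)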


7. Automate & Monitor the Pipeline

• Implement data quality checks (e.g., row counts, null checks).

• Use orchestration tools (Apache Airflow, dbt, Dataflow) to manage incremental loads.

• Track data lineage and auditing for governance.


8. Choose the Right Storage Format

• Use Parquet or Delta Lake for efficient storage and fast queries.

• Leverage streaming ingestion (Kafka, Kinesis) for real-time needs.


Conclusion


A layered approach with incremental processing, partitioning, and optimization ensures scalable and efficient data modeling. The key is to minimize unnecessary updates, efficiently manage history, and optimize for queries.



Creating a centralized data pipeline for ADO backlogs across all PODs

Here’s a polished sales pitch tailored to advocate for creating a Single Source of Truth (SSoT) for all data pipelines sourced from ADO backlogs across different pods:

Sales Pitch: Unlock Efficiency and Trust: A Unified Source for Data Pipelines


The Challenge We Face Today


Our data landscape is growing rapidly. Multiple pods are building and deploying data pipelines in parallel, with tasks tracked in Azure DevOps (ADO) backlogs. While this approach drives velocity, it also introduces fragmentation and visibility gaps:

• Siloed Backlogs: Each pod maintains its own backlog, making it difficult to track cross-pod dependencies and overall progress.

• Pipeline Overlap & Duplication: Without a consolidated view, we risk building redundant pipelines or solving the same problem differently across teams.

• Lack of Traceability: When issues arise, tracing a data pipeline back to its business context and development history is time-consuming.

• Data Quality Concerns: Inconsistent development practices lead to varied data quality, affecting downstream reporting and analytics.


These inefficiencies don’t just slow us down—they erode stakeholder confidence in our data products.

The Solution: A Single Source of Truth for Data Pipelines


We propose a centralized platform—a Single Source of Truth (SSoT)—that aggregates and standardizes information about all data pipelines across pods, directly integrated with ADO backlogs.


What Does This Look Like?

• Centralized Registry: A unified dashboard capturing metadata for every data pipeline (e.g., source system, transformation logic, target systems, SLAs).

• ADO Integration: Automated ingestion of backlog items, linking ADO work items to pipelines, data lineage, and deployment status.

• Pod-Agnostic View: Cross-pod visibility into in-flight and completed pipelines, enabling proactive identification of overlaps or gaps.

• Pipeline Lineage & Traceability: End-to-end visibility from backlog requirement → pipeline development → production deployment → data consumption.

• Standardized Metadata: Enforce minimum metadata standards for every pipeline to ensure consistency and reusability.

Key Benefits

• Duplication & Wasted Effort → Pods see existing pipelines before building new ones, reducing rework.

• Visibility Gaps → Leaders and teams see pipeline progress across pods in real time.

• Traceability & Auditability → Rapidly trace data issues back to the backlog item and pipeline owner.

• Operational Efficiency → Less time spent hunting for pipeline details means faster problem resolution.

• Data Quality & Governance → Standard metadata across pipelines drives consistent development practices and improved data trust.

• Cross-Team Collaboration → Pods leverage each other’s work, fostering reuse and accelerating delivery.

Real Business Impact

• 30-40% reduction in time spent troubleshooting pipeline failures.

• 20-25% faster delivery of new pipelines by eliminating redundant development.

• Improved stakeholder confidence with traceable, well-documented data pipelines.

Call to Action


Let’s invest in building this Single Source of Truth for our data pipelines.

By doing so, we future-proof our data delivery process, empower our teams, and position our organization as a leader in data-driven excellence.




Automating Excel

Here are 10 AI tools that make Excel seem like a toy: 👇 



1. SheetAI App  

  - Type your request in plain English. 

  - Automates complex tasks in minutes. 

  - Perfect for large-scale analysis. 

  🔗 [https://www.sheetai.app]



  

2. Arcwise 

  - Integrates AI customized to your business. 

  - Models built directly into spreadsheets. 

  - Boosts efficiency and personalization. 

  🔗 [https://arcwise.app]

  

3. ChatCSV (acquired by Flatfile)  

  - Ask questions directly to your CSV files. 

  - Acts like a personal data analyst. 

  - Simplifies complex queries effortlessly. 

  🔗 [https://www.chatcsv.co]

  

4. Numerous AI 

  - Integrates ChatGPT into Google Sheets. 

  - Simplifies data management and manipulation. 

  - Cost-effective and powerful. 

  🔗 [https://numerous.ai]

  

5. Rows 

  - AI-driven data analysis, summaries, and transformations. 

  - Accelerates spreadsheet creation. 

  - Ideal for quick decision-making. 

  🔗 [https://rows.com/ai]

  

6. Genius Sheets 

  - Connects to internal data using natural language. 

  - Runs instant analysis like never before. 

  - Perfect for real-time insights. 

  🔗 [https://lnkd.in/dVtyX7xb]

  

7. Equals 

  - Start with a blank sheet and gain instant insights. 

  - Ideal for quick, AI-powered analytics. 

  - Reduces manual effort drastically. 

  🔗 [https://equals.com/ai]

  

8. ChartPixel  

  - Creates AI-assisted charts and slides. 

  - Turns raw data into actionable insights. 

  - Saves hours of presentation preparation. 

  🔗 [https://chartpixel.com]

  


Spreadsheets don't have to be tedious anymore. 

Which of these tools are you adding to your workflow? 

Share your thoughts below! 


Bonus Alert 🎁


Free Courses you will regret not taking in 2025 👇


🚀7000+ free courses free access : https://lnkd.in/g_W26d7h


👉Microsoft Power BI

https://lnkd.in/g45MuT-W


👉Deep Learning 

https://lnkd.in/gY7WQe4K


👉Machine Learning

https://lnkd.in/ggA-6-Jh


👉IBM Data Science

https://lnkd.in/gu4RPKwD


👉IBM Data Analysts

https://lnkd.in/gyyJvR2D


👉Data Analytics

https://lnkd.in/g-3tsuKG


👉Google IT support

https://lnkd.in/gh8Gs7XN


👉Cybersecurity

https://lnkd.in/gFZPmX_c


👉IBM Project Manager

https://lnkd.in/d9g-SZsx


👉Google Project Management

https://lnkd.in/dN4Gv65a


👉AI Product Management

https://lnkd.in/dAQcVs3t


👉Meta UI/UX Design:

https://lnkd.in/gjCp7x8E


👉Meta Frontend Developer

https://lnkd.in/gTiGrbAK


👉MERN Stack Developer

https://lnkd.in/dmfer6Ys


👉Generative AI

https://lnkd.in/gXQepmtz


👉Prompt Engineering for


Oracle: Converting Date and Date-Time

The difference in results between TO_CHAR and TO_TIMESTAMP in Oracle when filtering by date and time often stems from the data type and internal storage of dates and timestamps. Here’s a breakdown of why they can produce different records:


1. Data Type Matters (DATE vs TIMESTAMP):

• DATE in Oracle stores date and time up to seconds, but it doesn’t include fractional seconds.

• TIMESTAMP includes fractional seconds.

• If you compare TO_CHAR with a formatted string and the underlying column is DATE, it truncates to seconds, so it matches based on the exact string representation.

• If you use TO_TIMESTAMP(), it is trying to match with precision, including fractional seconds if the column is TIMESTAMP.


2. TO_CHAR Behavior:

• When you use TO_CHAR(date_column, 'YYYY-MM-DD HH24:MI:SS'), it converts the date to a string representation in that format.

• This comparison is purely text-based after conversion, so it won’t consider fractional seconds.

• It can match exact HH24:MI:SS, but any fractional seconds are ignored.


3. TO_TIMESTAMP Behavior:

• When you filter with TO_TIMESTAMP(), you are comparing timestamp values.

• If your modified_date column is of type DATE, comparing it with TIMESTAMP can cause implicit type conversion, which might not work as expected.

• If modified_date is TIMESTAMP and has fractional seconds, filtering by TO_TIMESTAMP('05-FEB-25 09:46:56', 'DD-MON-YY HH24:MI:SS') will exclude rows with fractional seconds like 09:46:56.123.


4. Implicit Conversion Issues:

• Oracle might implicitly convert DATE to TIMESTAMP or vice versa when you mix types in comparison.

• This can lead to precision loss or unexpected results.


5. Best Practice:

• If modified_date is DATE type:

WHERE modified_date = TO_DATE('05-FEB-25 09:46:56', 'DD-MON-YY HH24:MI:SS')


• If modified_date is TIMESTAMP type:

WHERE modified_date = TO_TIMESTAMP('05-FEB-25 09:46:56', 'DD-MON-YY HH24:MI:SS')


• If you don’t care about fractional seconds:

WHERE TRUNC(modified_date) = TO_DATE('05-FEB-25', 'DD-MON-YY')



6. When You Use TO_CHAR:

• You are forcing a string comparison, which might work but is slower and can lead to confusion.

• It is not recommended for date filtering.


Would you like help reworking your query for your specific data type?



Tableau Server: extract all workbook attributes

Convert Tableau Server Client Workbooks List to Pandas DataFrame


When you use the Tableau Server Client (TSC) to get all workbooks from the server:

all_workbooks = list(TSC.Pager(server.workbooks))

You get a list of workbook objects. Each object has attributes like id, name, project_name, owner_id, etc.

Convert all_workbooks List to Pandas DataFrame:

import pandas as pd
import tableauserverclient as TSC

# Assuming you already have an authenticated `server` object
all_workbooks = list(TSC.Pager(server.workbooks))

# Extract the relevant attributes into a list of dictionaries
workbooks_data = [
    {
        'id': wb.id,
        'name': wb.name,
        'project_name': wb.project_name,
        'owner_id': wb.owner_id,
        'created_at': wb.created_at,
        'updated_at': wb.updated_at,
        'size': wb.size,
        'show_tabs': wb.show_tabs,
        'webpage_url': wb.webpage_url,
    }
    for wb in all_workbooks
]

# Convert to DataFrame
df = pd.DataFrame(workbooks_data)
print(df)

Explanation:

• List comprehension: Extracts key attributes from each WorkbookItem object.

• Attributes commonly used:

• wb.id

• wb.name

• wb.project_name

• wb.owner_id

• wb.created_at

• wb.updated_at

• wb.size

• wb.show_tabs

• wb.webpage_url


You can customize this list based on the attributes you need from the WorkbookItem object.

Sample Output:

                 id            name     project_name       owner_id           created_at  ...  size  show_tabs                      webpage_url
0  abcd1234efgh5678    Sales Report  Finance Project  user123456789  2023-10-01 08:00:00  ...  2500       True  https://tableau.server/view/...
1  wxyz9876lmno5432  Marketing Data  Marketing Group  user987654321  2023-11-05 10:30:00  ...  3100      False  https://tableau.server/view/...

Key Notes:

• Make sure you import pandas and tableauserverclient.

• This approach is efficient and works well with TSC.Pager() results.

• You can easily export the DataFrame to CSV or Excel:

df.to_csv('tableau_workbooks.csv', index=False)



Would you like help with pagination handling, filtering specific workbooks, or exporting the DataFrame?



Automating Tableau bulk connections

It is technically possible to use a tool like Selenium to automate the browser‐based creation of a BigQuery connection in Tableau—complete with entering a custom query and performing bulk connection operations—but there are several important caveats to consider:


What You Can Do with Selenium

• Browser Automation:

Selenium (or a similar browser automation tool) can control Chrome (or another browser) to log into Tableau Server or Tableau Cloud, navigate the UI, and simulate the manual steps you’d normally take to create a connection. This means you could script the process of:

• Signing into Tableau.

• Navigating to the data connection or data source creation page.

• Selecting Google BigQuery as the connection type.

• Entering or uploading service account credentials.

• Inserting a custom SQL query.

• Repeating these steps in a loop to handle bulk operations.

• Bulk Operations:

With careful scripting, you can iterate over a list of parameters or queries, effectively automating the creation of multiple connections. This could be useful if you need to deploy many similar connections at once.


Challenges and Considerations

• Brittleness:

UI automation is inherently fragile. Any change to the Tableau web interface (such as layout, element identifiers, or workflow changes) can break your Selenium script. This means you’ll have to invest time in maintaining your automation scripts.

• Lack of Official Support:

Tableau does not officially support UI automation for creating or managing connections. The REST API and Tableau Server Client (TSC) library are the recommended and supported methods for automating Tableau tasks. If those APIs do not expose exactly the functionality you need (for example, the embedding of a custom query in a connection), that might force you to consider UI automation—but keep in mind the risks.

• Authentication & Security:

Automating through the browser may require handling authentication (and possibly multi-factor authentication) in a secure manner. Ensure that any credentials or service account keys are managed securely and not hard-coded in your automation scripts.

• Complexity of Custom Queries:

If your process involves creating custom SQL queries as part of the connection setup, you’ll need to script the logic to input these queries correctly. Any errors in the custom query syntax or its integration into the Tableau UI may not be easily recoverable from an automated script.


Recommended Alternatives

• Tableau REST API / TSC Library:

Before resorting to Selenium, review whether you can accomplish your goal using Tableau’s REST API or the Tableau Server Client library. Although these APIs may not let you “create a connection from scratch” in every detail (especially if you need to embed non-standard elements like a custom query), they are far more stable and supported for bulk operations.

• Hybrid Approach:

In some cases, you might use a combination of API calls (for publishing and updating data sources) and lightweight browser automation to handle any remaining steps that the API cannot cover. This minimizes the parts of the process that rely on brittle UI automation.


In Summary


Yes, you can use Selenium or a similar tool to automate the creation of a BigQuery connection (including entering a custom query and handling bulk connections) by automating browser interactions in Chrome. However, this approach is generally less robust and more error-prone than using the officially supported Tableau REST API or TSC library. If you choose the Selenium route, prepare for additional maintenance and troubleshooting as Tableau’s web interface evolves.


For more details on Tableau’s supported automation methods, see the official Tableau REST API documentation.


From Blogger iPhone client

Kubernetes management

https://rancherdesktop.io/

From Blogger iPhone client

Aviation data

The primary sources for live airline flight data include:

1. ADS-B (Automatic Dependent Surveillance–Broadcast) Networks

• OpenSky Network (Free & Research-Oriented)

• ADS-B Exchange (Unfiltered Global Flight Data)

• FlightAware (Commercial & API Access)

• Flightradar24 (Commercial & API Access)

2. FAA & Government Aviation Feeds

• FAA SWIM (System Wide Information Management) – US-based real-time flight data

• Eurocontrol NM B2B – European air traffic data

3. IATA (International Air Transport Association) APIs

• Offers flight schedules, airline status, and operational data (paid access)

4. Airline & Airport APIs

• Many airlines and airports provide public or commercial APIs for live flight status

5. GDS (Global Distribution Systems)

• Amadeus, Sabre, and Travelport provide airline ticketing and scheduling data
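
Most of the free sources above are plain HTTP APIs. As a quick sketch, here is a query against the OpenSky Network's public endpoint for live state vectors (anonymous access works but is rate-limited):

import requests

response = requests.get("https://opensky-network.org/api/states/all", timeout=30)
response.raise_for_status()
data = response.json()

# Each state vector is a list; index 1 is the callsign, 5/6 are longitude/latitude.
for state in (data.get("states") or [])[:10]:
    print(state[1], state[5], state[6])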


If you’re looking for a commercial-grade solution along the lines of Aviation Week’s data services, providers such as FlightAware Firehose, OAG, and Cirium offer comprehensive real-time and historical aviation data. Are you planning to build something aviation-related?


From Blogger iPhone client

Tableau export workbooks

Tableau’s REST API supports downloading workbooks directly, and individual views can be exported as images or PDFs through dedicated endpoints. The Tableau Server Client (TSC) Python library wraps the same operations, and the JavaScript API covers embedded views. Here’s how:

1. Export a Tableau Workbook (TWB or TWBX)


You can export a workbook using the REST API by downloading it from Tableau Server:


Endpoint:

GET /api/3.15/sites/{site_id}/workbooks/{workbook_id}/content

Steps:

1. Authenticate using Tableau’s REST API (/auth/signin).

2. Get Site ID & Workbook ID from /sites and /workbooks.

3. Download the Workbook using the content endpoint.
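
Step 1's sign-in call returns the token used in the examples below. Here is a sketch using a personal access token and the JSON variant of the API (the token name/secret and contentUrl are placeholders):

import requests

TABLEAU_SERVER = "https://your-tableau-server"

payload = {
    "credentials": {
        "personalAccessTokenName": "your-token-name",
        "personalAccessTokenSecret": "your-token-secret",
        "site": {"contentUrl": "your-site-content-url"},
    }
}

response = requests.post(
    f"{TABLEAU_SERVER}/api/3.15/auth/signin",
    json=payload,
    headers={"Accept": "application/json"},
)
response.raise_for_status()

credentials = response.json()["credentials"]
TOKEN = credentials["token"]        # goes into the X-Tableau-Auth header
SITE_ID = credentials["site"]["id"]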


Example using Python:

import requests


TABLEAU_SERVER = "https://your-tableau-server"

TOKEN = "your-auth-token"

SITE_ID = "your-site-id"

WORKBOOK_ID = "your-workbook-id"


url = f"{TABLEAU_SERVER}/api/3.15/sites/{SITE_ID}/workbooks/{WORKBOOK_ID}/content"

headers = {"X-Tableau-Auth": TOKEN}


response = requests.get(url, headers=headers)


if response.status_code == 200:

  with open("workbook.twbx", "wb") as file:

    file.write(response.content)

  print("Workbook downloaded successfully.")

else:

  print("Failed to download workbook:", response.text)

2. Export Image or PDF using REST API


Images and PDFs are exported per view rather than per workbook, using the REST API’s view endpoints:


Export Image (PNG)

GET /api/3.15/sites/{site_id}/views/{view_id}/image

Export PDF

GET /api/3.15/sites/{site_id}/views/{view_id}/pdf

Example in Python (Export Image):

VIEW_ID = "your-view-id"

url = f"{TABLEAU_SERVER}/api/3.15/sites/{SITE_ID}/views/{VIEW_ID}/image"

response = requests.get(url, headers=headers)


if response.status_code == 200:

  with open("view.png", "wb") as file:

    file.write(response.content)

  print("Image exported successfully.")

else:

  print("Failed to export image:", response.text)

Example in Python (Export PDF):

url = f"{TABLEAU_SERVER}/api/3.15/sites/{SITE_ID}/views/{VIEW_ID}/pdf"

response = requests.get(url, headers=headers)


if response.status_code == 200:

  with open("view.pdf", "wb") as file:

    file.write(response.content)

  print("PDF exported successfully.")

else:

  print("Failed to export PDF:", response.text)

Alternative: Tableau Server Client (TSC)


Tableau Server Client (TSC) is a Python library that simplifies these operations.


Install it:

pip install tableauserverclient

Example (Download Workbook):

import tableauserverclient as TSC


TABLEAU_SERVER = "https://your-tableau-server"

USERNAME = "your-username"

PASSWORD = "your-password"

SITE_ID = "your-site-content-url"  # for TSC's TableauAuth, pass the site's contentUrl ("" for the Default site)

WORKBOOK_ID = "your-workbook-id"


server = TSC.Server(TABLEAU_SERVER, use_server_version=True)

auth = TSC.TableauAuth(USERNAME, PASSWORD, SITE_ID)


with server.auth.sign_in(auth):

  workbook = server.workbooks.get_by_id(WORKBOOK_ID)

  server.workbooks.download(workbook.id, filepath="workbook.twbx")

  print("Workbook downloaded.")
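
TSC can also export view images and PDFs through views.populate_image and views.populate_pdf. A short sketch, reusing the server and auth objects from the example above with a placeholder view ID:

VIEW_ID = "your-view-id"

with server.auth.sign_in(auth):
  view = server.views.get_by_id(VIEW_ID)

  server.views.populate_image(view)   # fetches PNG bytes into view.image
  with open("view.png", "wb") as file:
    file.write(view.image)

  server.views.populate_pdf(view)     # fetches PDF bytes into view.pdf
  with open("view.pdf", "wb") as file:
    file.write(view.pdf)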

Summary

| Format | REST API | TSC Python SDK |
|:--|:-:|:-:|
| Workbook (.twb/.twbx) | ✅ | ✅ (workbooks.download) |
| Image (.png) | ✅ | ✅ (views.populate_image) |
| PDF | ✅ | ✅ (views.populate_pdf) |

If you’re working with Tableau Public, you can use Tableau’s JavaScript API for embedded views.


Let me know if you need help setting this up!


From Blogger iPhone client

Copilot rest api

GitHub Copilot REST API


GitHub Copilot primarily operates through integrations in IDEs (VS Code, JetBrains, Neovim, etc.). GitHub does not provide a public REST API for Copilot code completions at this time; its Copilot-related REST endpoints are limited to administrative tasks such as seat and billing management.


Alternative Options:

1. GitHub Copilot CLI (Experimental)

• GitHub is testing a CLI-based Copilot, which might expose API-like capabilities in the future.

2. Using OpenAI API Instead

• Since GitHub Copilot was originally built on OpenAI’s Codex models, you can use OpenAI’s GPT API (e.g., gpt-4-turbo) to achieve similar code-generation capabilities.

• Example OpenAI API call using Python:

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment (openai>=1.0 style).
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Write a Python function to reverse a string"}]
)

print(response.choices[0].message.content)


3. GitHub GraphQL & REST APIs for Repository Actions

• If you want automation related to GitHub repositories, issues, or PRs, you can use:

• GitHub REST API

• GitHub GraphQL API
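
As a minimal sketch of the REST side (the owner, repo, and token below are placeholders), listing a repository's open pull requests:

import requests

headers = {
    "Authorization": "Bearer your-github-token",
    "Accept": "application/vnd.github+json",
}

response = requests.get(
    "https://api.github.com/repos/your-org/your-repo/pulls",
    headers=headers,
)

for pr in response.json():
    print(pr["number"], pr["title"])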


Would you like help integrating OpenAI’s API as a Copilot alternative?


From Blogger iPhone client

Snowflake and Alteryx

Yes, Snowflake integrates with Alteryx, allowing users to connect, transform, and analyze data seamlessly. Alteryx provides built-in connectors to read from and write to Snowflake, enabling data preparation, blending, and advanced analytics.

How to Connect Alteryx with Snowflake


There are two primary ways to connect Alteryx to Snowflake:


1. Using the Alteryx Snowflake Connector (Recommended)

• Alteryx has a native Snowflake connector that simplifies the integration.

• This method supports bulk loading, query pushdown, and optimized performance.


Steps:

1. Open Alteryx Designer.

2. Drag an “Input Data” tool onto the workflow canvas.

3. Select Snowflake as the data source.

4. Enter the connection details:

• Server: <your_snowflake_account>.snowflakecomputing.com

• Database: <your_database>

• Warehouse: <your_compute_warehouse>

• Username & Password: <your_credentials>

5. Choose the table/query you want to use.

6. Click OK to establish the connection.


2. Using ODBC Driver for Snowflake

• If the native connector is not available, Alteryx can connect via Snowflake’s ODBC driver.

• This method provides greater flexibility but may require more setup.


Steps:

1. Install the Snowflake ODBC driver from the Snowflake website.

2. Configure an ODBC Data Source in Windows:

• Open ODBC Data Source Administrator.

• Add a new System DSN.

• Select Snowflake ODBC Driver.

• Enter your Snowflake account details.

3. In Alteryx:

• Drag an “Input Data” tool onto the canvas.

• Choose ODBC as the connection type.

• Select your configured Snowflake DSN.

• Enter a SQL query or select a table.

4. Click OK to connect.
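
To sanity-check the DSN outside Alteryx, a quick sketch with pyodbc (the DSN name below is a placeholder for whatever you called your System DSN):

import pyodbc

# "SnowflakeDSN" is a placeholder for the System DSN configured above.
conn = pyodbc.connect("DSN=SnowflakeDSN;UID=your_username;PWD=your_password")
cursor = conn.cursor()
cursor.execute("SELECT CURRENT_VERSION()")
print(cursor.fetchone()[0])
conn.close()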

Key Benefits of Using Snowflake with Alteryx


✅ Fast Query Processing – Snowflake’s optimized compute engine speeds up Alteryx workflows.

✅ Pushdown Processing – Alteryx can offload queries to Snowflake for better performance.

✅ Seamless Data Blending – Combine Snowflake data with other sources in Alteryx.

✅ Bulk Loading Support – Large datasets can be written back to Snowflake efficiently.

✅ Secure & Scalable – Snowflake handles enterprise-grade security and scaling automatically.

Common Use Cases

• Data Preparation & Transformation – Load raw data from Snowflake, clean it in Alteryx, and write back transformed data.

• Predictive Analytics & ML – Use Alteryx for advanced modeling while leveraging Snowflake’s storage.

• Business Intelligence (BI) Enablement – Process Snowflake data in Alteryx before sending it to BI tools like Tableau or Power BI.


Would you like a specific example or workflow template for Snowflake-Alteryx integration?


From Blogger iPhone client

Snowflake and spark integration

Yes, you can use Apache Spark and Databricks with Snowflake to enhance data processing and analytics. There are multiple integration methods depending on your use case.

1. Using Apache Spark with Snowflake

• Snowflake provides a Spark Connector that enables bi-directional data transfer between Snowflake and Spark.

• The Snowflake Connector for Spark supports:

• Reading data from Snowflake into Spark DataFrames

• Writing processed data from Spark back to Snowflake

• Query pushdown optimization for performance improvements


Example: Connecting Spark to Snowflake

from pyspark.sql import SparkSession


# Initialize Spark session

spark = SparkSession.builder.appName("SnowflakeIntegration").getOrCreate()


# Define Snowflake connection options

sf_options = {

  "sfURL": "https://your-account.snowflakecomputing.com",

  "sfDatabase": "YOUR_DATABASE",

  "sfSchema": "PUBLIC",

  "sfWarehouse": "YOUR_WAREHOUSE",

  "sfUser": "YOUR_USERNAME",

  "sfPassword": "YOUR_PASSWORD"

}


# Read data from Snowflake into Spark DataFrame

df = spark.read \

  .format("snowflake") \

  .options(**sf_options) \

  .option("dbtable", "your_table") \

  .load()


df.show()
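
Note that this assumes the Snowflake Connector for Spark is on the classpath (for example, launching with spark-submit --packages net.snowflake:spark-snowflake_2.12:<version>); some connector versions require the fully qualified source name net.snowflake.spark.snowflake in .format(). Writing back is symmetric, reusing sf_options from above:

# Write the processed DataFrame back to a Snowflake table.
df.write \
    .format("snowflake") \
    .options(**sf_options) \
    .option("dbtable", "target_table") \
    .mode("append") \
    .save()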

2. Using Databricks with Snowflake


Databricks, which runs on Apache Spark, can also integrate with Snowflake via:

• Databricks Snowflake Connector (similar to Spark’s connector)

• Snowflake’s Native Query Engine (for running Snowpark functions)

• Delta Lake Integration (for advanced lakehouse architecture)


Integration Benefits

• Leverage Databricks’ ML/AI Capabilities → Use Spark MLlib for machine learning.

• Optimize Costs → Use Snowflake for storage & Databricks for compute-intensive tasks.

• Parallel Processing → Use Databricks’ Spark clusters to process large Snowflake datasets.


Example: Querying Snowflake from Databricks

# Configure Snowflake connection in Databricks

sfOptions = {

  "sfURL": "https://your-account.snowflakecomputing.com",

  "sfDatabase": "YOUR_DATABASE",

  "sfSchema": "PUBLIC",

  "sfWarehouse": "YOUR_WAREHOUSE",

  "sfUser": "YOUR_USERNAME",

  "sfPassword": "YOUR_PASSWORD"

}


# Read Snowflake table into a Databricks DataFrame

df = spark.read \

  .format("snowflake") \

  .options(**sfOptions) \

  .option("dbtable", "your_table") \

  .load()


df.display()

When to Use Snowflake vs. Databricks vs. Spark?

| Feature | Snowflake | Databricks | Apache Spark |
|:--|:--|:--|:--|
| Primary Use Case | Data warehousing & SQL analytics | ML, big data processing, ETL | Distributed computing, real-time streaming |
| Storage | Managed cloud storage | Delta Lake integration | External (HDFS, S3, etc.) |
| Compute Model | Auto-scale compute (separate from storage) | Spark-based clusters | Spark-based clusters |
| ML/AI Support | Snowpark (limited ML support) | Strong ML/AI capabilities | Native MLlib library |
| Performance | Fast query execution with optimizations | Optimized for parallel processing | Needs tuning for performance |

Final Recommendation

• Use Snowflake for structured data storage, fast SQL analytics, and ELT workflows.

• Use Databricks for advanced data engineering, machine learning, and big data processing.

• Use Spark if you need real-time processing, batch jobs, or a custom big data pipeline.


Would you like an example for a specific integration use case?


From Blogger iPhone client

Snowflake data warehouse

What is Snowflake Data?

Snowflake is a cloud-based data platform that provides a fully managed data warehouse-as-a-service (DWaaS). It enables businesses to store, process, and analyze large volumes of structured and semi-structured data efficiently. Unlike traditional on-premises databases, Snowflake is designed for the cloud, offering scalability, performance, and ease of use without requiring infrastructure management.


Key Features of Snowflake:

1. Multi-Cloud Support – Runs on AWS, Azure, and Google Cloud.

2. Separation of Compute and Storage – Allows independent scaling of processing power and storage.

3. Pay-as-You-Go Pricing – Charges based on actual usage.

4. Zero-Copy Cloning – Enables instant duplication of databases without extra storage costs (see the sketch after this list).

5. Automatic Optimization – Handles performance tuning automatically.

6. Multi-Tenancy – Allows multiple users and workloads to run concurrently.
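
Zero-copy cloning (feature 4 above) boils down to a single SQL statement. A minimal sketch using the snowflake-connector-python package, with all connection parameters as placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_username",
    password="your_password",
    warehouse="your_warehouse",
)

# The clone shares the original's storage until either side changes data.
conn.cursor().execute("CREATE DATABASE analytics_dev CLONE analytics_prod")
conn.close()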


Competitors of Snowflake


Snowflake competes with several cloud-based and on-premises data warehousing solutions, including:


1. Cloud-Based Competitors:

• Google BigQuery – Serverless data warehouse with real-time analytics, integrated with Google Cloud.

• Amazon Redshift – Fully managed data warehouse from AWS, optimized for complex queries.

• Microsoft Azure Synapse Analytics – Combines big data and data warehousing with integrated analytics.

• Databricks – Unified analytics platform built on Apache Spark, optimized for AI/ML and big data.


2. On-Premises & Hybrid Competitors:

• Teradata – High-performance on-premises and hybrid cloud data warehousing solution.

• Oracle Autonomous Data Warehouse – Cloud-based and on-premises data warehouse with automation features.

• IBM Db2 Warehouse – Enterprise data warehouse with AI-powered insights.


3. Open-Source & Alternative Solutions:

• ClickHouse – Open-source columnar database designed for fast analytics.

• Apache Druid – Real-time analytics database for high-speed queries.

• Presto (Trino) – SQL query engine for big data analytics.


How Snowflake Stands Out


Snowflake’s major advantage lies in its simplicity, scalability, and cloud-native architecture, making it easier to use compared to traditional solutions like Teradata or Oracle. However, competitors like Google BigQuery and Amazon Redshift challenge it with their deep cloud integration and cost-effective pricing models.





From Blogger iPhone client

Airline

As of February 2025, JetBlue Airways’ leadership team includes:

• Chief Executive Officer (CEO): Joanna Geraghty

• President: Marty St. George

• Chairman of the Board: Peter Boneparth


Joanna Geraghty was appointed CEO in February 2024, becoming the first woman to lead a major U.S. airline. Marty St. George returned to JetBlue as President in February 2024, overseeing the airline’s commercial functions. Peter Boneparth has served as Chairman since May 2020. 



Jet Blue as a business


JetBlue Airways has established itself as a distinctive player in the airline industry by blending cost-effective operations with enhanced passenger experiences. Here’s an overview of its business model, competitive standing, fleet, and technological advancements:


Business Model and Market Position


JetBlue operates a hybrid model that combines elements of low-cost carriers with services typical of full-service airlines. This approach allows the airline to offer competitive fares while providing value-added amenities. Key aspects of JetBlue’s business model include:

• Customer-Centric Services: Passengers enjoy complimentary in-flight entertainment, free Wi-Fi, and snacks, enhancing the overall travel experience.

• Strategic Route Network: Serving over 100 destinations across the U.S., Caribbean, and Latin America, JetBlue focuses on high-demand markets to maximize efficiency.

• Loyalty Program: The TrueBlue program incentivizes repeat business, contributing significantly to customer retention.


Despite these strengths, JetBlue faces challenges in profitability. The airline has reported losses in recent years, prompting strategic shifts such as reducing unprofitable routes and enhancing premium offerings to attract higher-paying customers. 


Fleet and Technological Advancements


JetBlue’s fleet strategy emphasizes modern, fuel-efficient aircraft to improve operational performance and reduce environmental impact. Notable initiatives include:

• Modern Fleet Composition: The airline operates a young fleet, primarily consisting of Airbus A320 and A321 models, with an average age of approximately 5.5 years. This focus on newer aircraft enhances fuel efficiency and reliability. 

• Sustainable Practices: JetBlue has committed to purchasing sustainable aviation fuel and aims to achieve net-zero carbon emissions by 2040, a decade ahead of industry targets. 

• In-Flight Connectivity: The airline offers free high-speed Wi-Fi on all flights, recognizing the growing importance of connectivity for passengers. 


Competitive Standing


In the competitive airline landscape, JetBlue distinguishes itself through superior customer service and innovative offerings. However, it faces competition from both low-cost carriers and major airlines. To strengthen its market position, JetBlue is:

• Expanding Premium Services: The introduction of the ‘Mint’ business class and plans to open exclusive airport lounges in New York and Boston aim to attract premium travelers. 

• Strategic Partnerships: Codeshare agreements with international airlines expand JetBlue’s network and offer passengers more travel options. 


While JetBlue’s unique approach offers a competitive edge, the airline continues to navigate challenges related to profitability and market share. Ongoing efforts to optimize operations and enhance service offerings are central to its strategy in the evolving aviation industry.



As of early 2025, JetBlue Airways operates a fleet of approximately 290 aircraft, comprising the following types:

• Airbus A320-200: 130 aircraft

• Airbus A321-200: 63 aircraft

• Airbus A321neo: 24 aircraft

• Airbus A220-300: 15 aircraft

• Embraer E190: 48 aircraft


JetBlue is in the process of modernizing its fleet, focusing on enhancing fuel efficiency and passenger comfort. The airline has been phasing out its Embraer E190 aircraft, replacing them with the more efficient Airbus A220-300. Additionally, JetBlue has introduced the Airbus A321 Long Range (A321LR) to support its transatlantic services, featuring 114 seats, including 24 Mint Suites®. 


This strategic fleet renewal aims to improve operational performance and align with JetBlue’s commitment to sustainability.



comparison


As of December 2024, Qatar Airways operates a diverse fleet of approximately 255 aircraft, comprising both narrow-body and wide-body models. The fleet includes:


Narrow-Body Aircraft:

• Airbus A320-200: 28 aircraft

• Boeing 737 MAX 8: 9 aircraft


Wide-Body Aircraft:

• Airbus A330-200: 3 aircraft

• Airbus A330-300: 7 aircraft

• Airbus A350-900: 34 aircraft

• Airbus A350-1000: 24 aircraft

• Airbus A380-800: 8 aircraft

• Boeing 777-200LR: 7 aircraft

• Boeing 777-300ER: 57 aircraft

• Boeing 787-8: 31 aircraft

• Boeing 787-9: 19 aircraft


Qatar Airways has also placed orders for additional aircraft to further modernize and expand its fleet:

• Airbus A321neo: 50 orders, with deliveries expected to begin in 2026. These will replace the existing A320-200s.

• Boeing 737 MAX 10: 25 orders, with options for an additional 25.

• Boeing 777-9: 60 orders, with deliveries anticipated by 2026.


This strategic expansion underscores Qatar Airways’ commitment to maintaining a modern and efficient fleet, enhancing passenger comfort, and optimizing operational performance.


As of early 2025, here’s a comparative overview of the fleet composition for JetBlue Airways, Qatar Airways, United Airlines, Air Canada, American Airlines, and Emirates:

| **Aircraft Type** | **JetBlue Airways** | **Qatar Airways** | **United Airlines** | **Air Canada** | **American Airlines** | **Emirates** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Airbus A220-300 | 44 | — | — | — | — | — |
| Airbus A320-200 | 11 | — | 96 | — | 48 | — |
| Airbus A321-200 | 35 | — | 65 | — | 218 | — |
| Airbus A321neo | 10 | — | 23 | — | 70 | — |
| Airbus A330-200 | — | 3 | — | 8 | — | — |
| Airbus A330-300 | — | 7 | — | 12 | — | — |
| Airbus A350-900 | — | 34 | — | — | — | — |
| Airbus A350-1000 | — | 24 | — | — | — | — |
| Airbus A380-800 | — | 8 | — | — | — | 118 |
| Boeing 737-800 | — | — | 141 | 39 | 303 | — |
| Boeing 737 MAX 8 | — | 9 | 30 | 28 | 42 | — |
| Boeing 737 MAX 9 | — | — | 70 | — | 30 | — |
| Boeing 737 MAX 10 | — | — | — | — | — | — |
| Boeing 747-8 | — | — | — | — | — | — |
| Boeing 757-200 | — | — | 40 | — | 34 | — |
| Boeing 757-300 | — | — | 21 | — | — | — |
| Boeing 767-300ER | — | — | 37 | — | 24 | — |
| Boeing 767-400ER | — | — | 16 | — | — | — |
| Boeing 777-200 | — | — | 19 | — | 47 | — |
| Boeing 777-200ER | — | — | 55 | — | 47 | — |
| Boeing 777-200LR | — | 7 | — | — | — | — |
| Boeing 777-300ER | — | 57 | 22 | 19 | 20 | 133 |
| Boeing 787-8 | — | — | 12 | 8 | 24 | — |
| Boeing 787-9 | — | — | 38 | 29 | 25 | — |
| Boeing 787-10 | — | — | 21 | — | 20 | — |

Note: The numbers above are approximate and based on available data as of early 2025. For the most current and detailed fleet information, please refer to the respective airlines’ official communications or financial disclosures.


This matrix provides a snapshot of the diverse aircraft types and their distribution across these major airlines, reflecting their strategic choices in fleet composition to meet various operational and market demands.






From Blogger iPhone client