Friday, February 16, 2024

Oracle RAC 19c Installation on Google Cloud Platform

This post walks through installing Oracle 19c Grid Infrastructure on the Google Cloud VMware Engine (GCVE) platform. Download the latest Oracle Grid and Database software from https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html. After downloading and staging the software on the server, verify that the operating system meets the requirements documented at https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/supported-red-hat-enterprise-linux-8-distributions-for-x86-64.html#GUID-B1487167-84F8-4F8D-AC31-A4E8F592374B.

Create the appropriate ASM disk groups before starting the installation. Once the disk groups are in place, the installation can begin.

unzip LINUX.X64_193000_grid_home.zip
cd $GRID_HOME
./gridSetup.sh
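
Once gridSetup.sh (and the root scripts it prompts for) completes, a quick status check confirms the stack is up on every node. A minimal sketch, assuming the Grid Infrastructure environment variables are set:

crsctl check cluster -all   # CRS, CSS, and EVM should report online on each node
crsctl stat res -t          # tabular view of cluster resources and their states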

At this point the Clusterware is up and healthy, and the installation of the Oracle Grid cluster on the Google Cloud VMware Engine (GCVE) platform is complete.


You may run into errors during the installation. If you do, email me a detailed description of the issue and I will respond promptly with the appropriate fix. Some of the errors you may encounter include, but are not limited to, the ones listed below.

Errors:
 
[INS-44000] Passwordless SSH connectivity is not setup from the local node to the following nodes.
[INS-41118] The interface (eth1) chosen as Public or Private is not on a shared subnet on the following nodes.
PRVG-11640 : The check "Network Time Protocol (NTP)" was not performed as it is disabled.
PRVF-7611 : Proper user file creation mask (umask).
[INS-30024] Installer has detected that the location determined as Oracle Grid Infrastructure home.
PRVG-1019 : The NTP configuration file.


Friday, January 5, 2024

GCP Interview Questions

1. What is Google Cloud Platform (GCP)?

GCP is a suite of cloud computing services provided by Google. It offers various services, including computing power, storage, and databases, as well as machine learning, data analytics, and networking services. GCP enables organizations to build, deploy, and scale applications efficiently in the cloud.

2. Explain the key components of GCP.

Compute Engine: Provides virtual machines (VMs) for running applications.
App Engine: A platform-as-a-service (PaaS) offering for building and deploying applications without managing the underlying infrastructure.
Kubernetes Engine: A managed Kubernetes service for container orchestration.
Cloud Storage: Object storage service for storing and retrieving data.
BigQuery: Serverless data warehouse for analytics.
Cloud SQL: Managed relational database service.
Cloud Pub/Sub: Messaging service for building event-driven systems.
Cloud Spanner: Globally distributed, horizontally scalable database.

3. Explain the difference between Compute Engine and App Engine.

Compute Engine: Infrastructure as a Service (IaaS) offering that provides virtual machines. Users have full control over the VMs, including the operating system and software configurations.
App Engine: Platform as a Service (PaaS) offering that abstracts away the infrastructure details. Developers focus on writing code, and Google manages the underlying infrastructure, automatically scaling as needed.

4. What is Kubernetes, and how does GCP support it?

Kubernetes: An open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications.
GCP Kubernetes Engine: A managed Kubernetes service that simplifies the deployment and operation of Kubernetes clusters. It automates tasks like cluster provisioning, scaling, and upgrades.

5. Explain Cloud Storage Classes in GCP.

Standard: General-purpose storage with high performance and low latency.
Nearline: Designed for data accessed less frequently but requires low latency when accessed.
Coldline: Suited for archival data with infrequent access.
Archive: Lowest-cost option for long-term storage with rare access.
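
For example, the storage class is set when creating a bucket and can be changed per object later. A hedged sketch using gsutil; the bucket and object names are placeholders:

gsutil mb -c nearline -l us-central1 gs://my-example-bucket     # create a Nearline bucket
gsutil rewrite -s coldline gs://my-example-bucket/archive.tar   # move one object to Coldline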

6. How does Cloud Identity and Access Management (IAM) work in GCP?

IAM: Manages access control by defining roles and assigning them to users or groups.
Roles: Define permissions, and users are granted those roles.
Principals: Entities that can request access, such as users, groups, or service accounts.
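
As an illustration, access is granted by binding a principal to a role on a resource. A minimal gcloud sketch; the project ID and user are placeholders:

gcloud projects add-iam-policy-binding my-project \
    --member="user:jane@example.com" \
    --role="roles/storage.objectViewer"   # read-only access to Cloud Storage objects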

7. Explain Google Cloud Pub/Sub.

Pub/Sub: A messaging service for building event-driven systems. Publishers send messages to topics, and subscribers receive messages from subscriptions to those topics.
Topics: Channels for publishing messages.
Subscriptions: Named resources representing the stream of messages from a single, specific topic.
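
A quick sketch of the publish/subscribe flow with the gcloud CLI; the topic and subscription names are placeholders:

gcloud pubsub topics create orders                             # channel for publishing
gcloud pubsub subscriptions create orders-sub --topic=orders   # attach a subscription
gcloud pubsub topics publish orders --message="order received"
gcloud pubsub subscriptions pull orders-sub --auto-ack         # receive and acknowledge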

8. What is Google Cloud BigQuery, and how is it different from traditional databases?

BigQuery: A fully managed, serverless data warehouse for analytics. It enables super-fast SQL queries using the processing power of Google's infrastructure.
Differences: BigQuery is designed for analytical workloads and can handle massive datasets with high concurrency, while traditional databases are often optimized for transactional workloads.

9. Explain the concept of Virtual Private Cloud (VPC) in GCP.

VPC: A private network for GCP resources. It provides isolation, segmentation, and control over the network environment.
Subnets: Segments of the IP space within a VPC, allowing for further network isolation.
Firewall Rules: Control traffic to and from instances.

10. What are Cloud Functions in GCP?

Cloud Functions: Serverless compute service that allows you to run event-triggered functions without provisioning or managing servers.
Event Sources: Triggers for Cloud Functions, such as changes in Cloud Storage, Pub/Sub messages, or HTTP requests.
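
For instance, an HTTP-triggered function can be deployed straight from source. A hedged sketch; the function name, runtime, and source directory are assumptions:

gcloud functions deploy hello-http \
    --runtime=python311 \
    --trigger-http \
    --entry-point=hello \
    --source=./hello   # ./hello/main.py must define hello(request)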

11. What is Stackdriver in GCP?

Stackdriver: A comprehensive observability suite in GCP that includes logging, monitoring, trace analysis, and error reporting. It gives developers and operators insight into the performance, availability, and overall health of their applications.

12. Explain Stackdriver Logging.

Stackdriver Logging: A fully-managed logging service that allows you to store, search, analyze, and alert on log data. It collects log entries from applications and infrastructure and provides a centralized location for log management.

13. What are log entries in Stackdriver Logging?

Log Entries: Records of events generated by GCP resources. Each log entry has a timestamp, severity level, log name, and payload containing specific information about the event.

14. How can you view logs in Stackdriver Logging?

Stackdriver Console: You can view logs interactively in the Stackdriver Logging console. It provides a user-friendly interface to search, filter, and analyze logs.

15. Explain the concept of Log Severity Levels in Stackdriver Logging.

Severity Levels: Indicate the importance of a log entry. Levels include DEBUG, INFO, NOTICE, WARNING, ERROR, and CRITICAL. Setting and using severity levels helps in identifying and addressing issues effectively.

16. What is Stackdriver Monitoring?

Stackdriver Monitoring: A service that provides visibility into the performance, uptime, and overall health of applications and services. It includes dashboards, alerting policies, and metrics collection.

17. Explain Stackdriver Dashboards.

Dashboards: Customizable visual displays that allow users to aggregate and display metrics and charts for monitoring purposes. Dashboards can include charts, text widgets, and predefined components.

18. How does Stackdriver Monitoring use Metrics?

Metrics: Quantitative measurements representing the behavior of a system over time. Stackdriver Monitoring collects and stores metrics that help in understanding the performance and health of resources.

19. What is an Alert Policy in Stackdriver Monitoring?

Alert Policy: Defines conditions for triggering alerts based on specified metrics and thresholds. When conditions are met, notifications can be sent via various channels like email, SMS, or third-party integrations.

20. Explain the integration of Stackdriver Trace with Logging and Monitoring.

Stackdriver Trace: A distributed tracing service that allows you to trace the performance of requests as they travel through your application.
Integration: Trace data can be correlated with logs and monitoring metrics in Stackdriver, providing a comprehensive view of the application's behavior.

21. How can you export logs from Stackdriver Logging?

Export Sinks: You can export logs to other Google Cloud services, Cloud Storage, or external systems using export sinks. This allows for archiving, analysis, and integration with third-party tools.
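
For example, a sink can route high-severity log entries to a Cloud Storage bucket. A minimal sketch with placeholder names:

gcloud logging sinks create error-sink \
    storage.googleapis.com/my-log-archive \
    --log-filter='severity>=ERROR'   # export ERROR and above to the bucket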

22. Explain the concept of Metrics Explorer in Stackdriver Monitoring.

Metrics Explorer: A tool in the Stackdriver Monitoring console that allows users to explore and visualize metrics data. It provides a flexible interface for creating custom charts and analyzing metric data.

23. How does Stackdriver handle autoscaling in GCP?

Autoscaler: Managed instance groups autoscale based on metrics that Stackdriver collects; autoscaling policies dynamically adjust the number of instances to match demand, ensuring optimal utilization of resources.

24. What is the purpose of Stackdriver Error Reporting?

Error Reporting: Automatically detects and aggregates errors produced by applications. It provides insights into the frequency and impact of errors, helping identify and resolve issues.

25. How can you set up alerting in Stackdriver Monitoring?

Alerting Policies: You can create alerting policies in Stackdriver to define conditions for triggering alerts. These policies can be associated with specific resources, and notifications can be configured for various channels.

Wednesday, November 1, 2023

Autonomous Health Framework (AHF) Installation

Autonomous Health Framework (AHF) is a one-stop tool for diagnosing your entire system. You can download the latest AHF from My Oracle Support: Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk (Doc ID 2550798.1).

If you hit any errors related to Oracle Trace File Analyzer (TFA), you can check https://sajidkhadarabad.blogspot.com/2018/10/tfa-00104-tfa-00002-oracle-trace-file.html

[root@sajidserver01 tfa]# unzip AHF-LINUX_v23.9.0.zip

Archive:  AHF-LINUX_v23.9.0.zip
replace ahf_setup? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
  inflating: ahf_setup
 extracting: ahf_setup.dat
  inflating: README.txt
  inflating: oracle-tfa.pub

[root@sajidserver01 tfa]# ./ahf_setup -local
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_239000.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 23.9.0 Build Date: 202310
AHF is already installed at /usr/tfa/oracle.ahf
Installed AHF Version: 23.4.2 Build Date: 202305
Do you want to upgrade AHF [Y]|N : Y
Upgrading /usr/tfa/oracle.ahf
Shutting down AHF Services
Upgrading AHF Services
Beginning Retype Index
TFA Home: /usr/tfa/oracle.ahf/tfa
Moving existing indexes into temporary folder
Index file for index moved successfully
Index file for index_metadata moved successfully
Index file for complianceindex moved successfully
Moved indexes successfully
Starting AHF Services
No new directories were added to TFA
Directory /usr/grid/crsdata/sajidserver01/trace/chad was already added to TFA Directories.
Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N
.--------------------------------------------------------------.
| Host          | TFA Version | TFA Build ID | Upgrade Status  |
+---------------+-------------+--------------+-----------------+
| sajidserver01 | 23.9.0.0.0  | 23090002023  | UPGRADED        |
| sajidserver02 | 23.9.0.0.0  | 23090002023  | UPGRADED        |
'---------------+-------------+--------------+-----------------'
Setting up AHF CLI and SDK
AHF is successfully upgraded to latest version.

[root@sajidserver01 bin]# ./tfactl status
.------------------------------------------------------------------------------------------------.
| Host          | Status of TFA | PID    | Port | Version    | Build ID    | Inventory Status    |
+---------------+---------------+--------+------+------------+-------------+---------------------+
| sajidserver01 | RUNNING       | 368155 | 8200 | 23.9.0.0.0 | 23090002023 | COMPLETE            |
| sajidserver02 | RUNNING       | 942953 | 8200 | 23.9.0.0.0 | 23090002023 | COMPLETE            |
'---------------+---------------+--------+------+------------+-------------+---------------------'

[root@sajidserver01 bin]# ./tfactl toolstatus
Running command tfactltoolstatus on sajidserver01 ...
.------------------------------------------------------------------.
|                  TOOLS STATUS - HOST : sajidserver01                 |
+----------------------+--------------+--------------+-------------+
| Tool Type            | Tool         | Version      | Status      |
+----------------------+--------------+--------------+-------------+
| AHF Utilities        | alertsummary |       23.0.9 | DEPLOYED    |
|                      | calog        |       23.0.9 | DEPLOYED    |
|                      | dbglevel     |       23.0.9 | DEPLOYED    |
|                      | grep         |       23.0.9 | DEPLOYED    |
|                      | history      |       23.0.9 | DEPLOYED    |
|                      | ls           |       23.0.9 | DEPLOYED    |
|                      | managelogs   |       23.0.9 | DEPLOYED    |
|                      | menu         |       23.0.9 | DEPLOYED    |
|                      | param        |       23.0.9 | DEPLOYED    |
|                      | ps           |       23.0.9 | DEPLOYED    |
|                      | pstack       |       23.0.9 | DEPLOYED    |
|                      | summary      |       23.0.9 | DEPLOYED    |
|                      | tail         |       23.0.9 | DEPLOYED    |
|                      | triage       |       23.0.9 | DEPLOYED    |
|                      | vi           |       23.0.9 | DEPLOYED    |
+----------------------+--------------+--------------+-------------+
| Development Tools    | oratop       |       14.1.2 | DEPLOYED    |
+----------------------+--------------+--------------+-------------+
| Support Tools Bundle | darda        | 2.10.0.R6036 | DEPLOYED    |
|                      | oswbb        | 22.1.0AHF    | RUNNING     |
|                      | prw          | 12.1.13.11.4 | RUNNING     |
'----------------------+--------------+--------------+-------------'
Note :-
  DEPLOYED    : Installed and Available - To be configured or run interactively.
  NOT RUNNING : Configured and Available - Currently turned off interactively.
  RUNNING     : Configured and Available.

[root@sajidserver01 bin]# ./tfactl -help
Usage : /usr/19.0.0/grid/bin/tfactl <command> [options]
    commands:diagcollect|analyze|ips|run|start|stop|enable|disable|status|print|access|purge|directory|host|set|toolstatus|uninstall|diagnosetfa|syncnodes|upload|availability|rest|events|search|changes|isa|blackout|rediscover|modifyprofile|refreshconfig|get|version|floodcontrol|queryindex|index|purgeindex|purgeinventory|set-sslconfig|set-ciphersuite|collection
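
With AHF upgraded, a typical next step is collecting diagnostics. A hedged example; the one-hour window is an assumption:

[root@sajidserver01 bin]# ./tfactl diagcollect -since 1h   # gather logs from all nodes for the last hour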

Wednesday, October 4, 2023

Mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet'

When using the mysqldump utility to back up a database of considerable size, you may encounter the error below.

[mysql@Sajidserver ~]$mysqldump -u root -p<pwd> <DB Name> > /u03/backup.sql 
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `log` at row:

To get a successful backup, pass the option --max_allowed_packet=1024M to the mysqldump utility; this addresses the issue.

[mysql@Sajidserver ~]$mysqldump -u root -p<pwd> --max_allowed_packet=1024M <DB Name> > /u03/backup.sql
[mysql@Sajidserver ~]$

Alternatively, you can set max_allowed_packet=1024M in the my.cnf file, save it, and run the backup normally.
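
A minimal my.cnf sketch; putting the setting under the [mysqldump] group applies it only to the dump client:

[mysqldump]
max_allowed_packet=1024M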

Thursday, July 13, 2023

Machine Learning

Machine learning is a branch of artificial intelligence that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

Machine learning is the study of algorithms that modify their behavior as they process new data.

Machine learning algorithms are used in many areas, including but not limited to:

1. Image Recognition
2. Natural Language Processing (NLP)
3. Robotics
4. Facial Recognition
5. Stock Market Analysis

Machine learning algorithms can be broken down into two categories: supervised and unsupervised. Supervised learning algorithms use input data that has been labeled by a human to train the machine, while unsupervised learning algorithms do not have this labeled data and instead look for patterns in the raw data itself.

Supervised learning algorithms can be further divided into regression and classification models. Regression models are used to predict continuous values, such as stock prices over time or how quickly a car will drive on a highway, while classification models are used to predict discrete values, such as whether or not someone has cancer or if a person has purchased something on a website before.


The two main approaches to machine learning are supervised and unsupervised learning. In supervised learning, the training data is labeled: each example is paired with the correct answer, and the algorithm learns to predict that answer. In unsupervised learning, the data is unlabeled, and algorithms instead look for structure on their own, for example by grouping items based on their similarity.

Machine learning algorithms can also be grouped into three broad categories: linear methods, non-linear methods, and kernel methods. Linear methods include linear classification and regression; non-linear methods include clustering and anomaly detection; and kernel methods include support vector machines and Gaussian processes.

Monday, June 26, 2023

Change Oracle Database Compatible Parameter in Primary and Standby Servers

To change the COMPATIBLE parameter to 19.0.0.0 on both the primary and standby servers, follow the steps below. Please note that this process requires database downtime. Begin by changing the parameter on the standby, then on the primary.


SQL> SELECT value FROM v$parameter WHERE name = 'compatible';

VALUE
-------------------------------------------------------------
12.2.0

ALTER SYSTEM SET COMPATIBLE= '19.0.0.0' SCOPE=SPFILE SID='*';

Bounce the Standby database in the mounted state and restart the Managed Recovery Process.

[oracle@sajidserver01 ~]$ srvctl stop database -d sajid_texas
[oracle@sajidserver01 ~]$ srvctl start database -d sajid_texas -o mount


alter database recover managed standby database disconnect from session;
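
Before touching the primary, it is worth confirming that the MRP is applying redo again. A quick check on the standby (a sketch; your output will vary):

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';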

Now change the COMPATIBLE parameter on the primary database. Make sure you take a proper RMAN backup of the database first: once the database starts with the higher COMPATIBLE setting it cannot be lowered again, so restoring from backup is the only way to revert.

ALTER SYSTEM SET COMPATIBLE= '19.0.0.0' SCOPE=SPFILE SID='*';

Bounce the Primary database now and make sure there is no lag in DGMGRL.

[oracle@sajidserver01 ~]$ srvctl stop database -d sajid_pittsburgh
[oracle@sajidserver01 ~]$ srvctl start database -d sajid_pittsburgh

SQL> SELECT value FROM v$parameter WHERE name = 'compatible';

VALUE
-------------------------------------------------------------
19.0.0.0

DGMGRL> show configuration;
Configuration - SAJID_CONF
  Protection Mode: MaxPerformance
  Members:
  sajid_pittsburgh - Primary database
  sajid_texas - Physical standby database
Fast-Start Failover:  Disabled
Configuration Status:
SUCCESS   (status updated 50 seconds ago)

At this stage your database compatibility has been changed to 19.0.0.0. This is a prerequisite for installing or upgrading Oracle Enterprise Manager to version 13.5.


Friday, March 17, 2023

ORA-28017: The password file is in the legacy format.

 If you’ve ever encountered the ORA-28017: The password file is in the legacy format error, you know how frustrating it can be to solve. This Oracle Database error occurs when the password file in use was created in the pre-12c legacy format. In this blog post, we will discuss what the error means and provide steps for resolving it.

The first step in resolving the ORA-28017 error is understanding why it occurred in the first place. The issue arises when the password file is still in the pre-12c legacy format rather than the 12c format, which is required to store the administrative privileges introduced in 12c (SYSBACKUP, SYSDG, and SYSKM). Operations that need the newer format, such as adding users with these privileges, fail with "ORA-28017" or "Password File Is In Legacy Format".

To resolve this issue quickly and easily without having to upgrade your entire system or reinstall software packages, we suggest following these steps : 

  1. Check for an existing legacy password file on your server; for a database instance it normally lives under $ORACLE_HOME/dbs as orapw<SID>. Run "ls -ltr" in that directory to list the files, and remove the legacy file with the rm command.
  2. Create a new 12c-format password file using the "orapwd" utility provided by default in the ORACLE_HOME/bin directory. Please refer to Doc ID 2112456 for more details about creating 12c-format password files and related troubleshooting tips.
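
A hedged orapwd sketch; the file locations and disk group name are assumptions, not taken from this environment:

# database instance: recreate the password file in 12c format
orapwd file=$ORACLE_HOME/dbs/orapwORCL format=12 force=y
# ASM: the password file normally lives in a disk group
orapwd file='+DATA/orapwasm' asm=y force=y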


SQL> create user asmsnmp identified by <password>;
create user asmsnmp identified by <password>
*
ERROR at line 1:
ORA-28017: The password file is in the legacy format.

Check the output of srvctl config asm:

[grid@sajidahmed ~]$ srvctl config ASM

ASM home: <CRS home>
Password file: orapwASM
Backup of Password file: <Location>
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNETLSNR

sqlplus / as sysasm

SQL> alter system flush passwordfile_metadata_cache;
SQL> select * from v$pwfile_users;

USERNAME   SYSDB  SYSOPER  SYSASM  SYSBACKUP  SYSDG  SYSKM  CON_ID
---------  -----  -------  ------  ---------  -----  -----  ------
SYS        TRUE   TRUE     FALSE   FALSE      FALSE  FALSE       0


Now create the asmsnmp user and grant it the SYSASM privilege; this resolves the issue.

[grid@sajidahmed ~]$ asmcmd orapwusr --add ASMSNMP

Enter Password: *************

[grid@sajidahmed ~]$ asmcmd orapwusr --grant sysasm ASMSNMP

[grid@sajidahmed ~]$ sqlplus asmsnmp/<password> as sysasm
SQL*Plus: Release 19.0.0.0.0 - test on Fri Mar 17 13:31:06 2023
Version 19.18.0.0.0
Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.18.0.0.0
SQL> show user;
USER is "ASMSNMP"


By following these simple steps, you should be able to resolve the "ORA-28017: The Password File Is In Legacy Format" error quickly and without much effort!


Wednesday, February 8, 2023

Data Migration in AWS, GCP, Azure, and OCC

  1. Data migration: what is it?
  2. Types of databases and data
  3. Data migration tools and their names in AWS, GCP, Azure, and OCC
  4. Tools and strategies for migrating data from on-premises systems to cloud services such as AWS, GCP, and OCC
  5. Advantages of both on-premises and cloud databases
  6. Conclusion


The process of moving data from one system to another, such as from an old database to a new one or from an on-premises system to a cloud-based system, is referred to as Data Migration.

Different database management systems (DBMS), including MySQL, Oracle, PostgreSQL, DB2, and SQL Server, are used to manage various sorts of data and databases, including structured data (like that found in a relational database) and unstructured data (such as text and images).

AWS, GCP, and OCC (Oracle Cloud Infrastructure) each offer data transfer technologies, such as the AWS Database Migration Service, the GCP Database Migration Service, and the OCC Data Transfer Service. These technologies make it possible to migrate data between different databases and/or cloud platforms.

AWS, GCP, and OCC all offer a variety of data migration tools to help users move data between their different services and platforms.

AWS:
  • AWS Data Migration Service (DMS): A fully managed service that makes it easy to migrate data to and from various databases, data warehouses, and data lakes.
  • AWS Schema Conversion Tool (SCT): A tool that helps convert database schema and stored procedures to be compatible with the target database engine.
  • AWS Database Migration Service (DMS) and AWS SCT can be used together to migrate both data and schema, as the sketch below illustrates.
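
Once a DMS replication task has been defined, it can be started from the CLI. A hedged sketch; the task ARN is a placeholder:

aws dms start-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLE \
    --start-replication-task-type start-replication   # first full-load run of the task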

GCP:
  • Google Cloud Storage Transfer Service: A fully managed service that allows you to transfer large data sets from on-premises storage to Cloud Storage.
  • Google Cloud Storage Nearline: A storage service that stores data at a lower cost but with a slightly longer retrieval time.
  • Google Cloud SQL: A fully-managed relational database service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud.
  • Cloud Dataflow
  • Cloud Dataproc
  • Cloud Spanner

Azure:
  • Azure Database Migration Service (DMS)
  • Azure Data Factory
  • Azure Data Lake Storage Gen1
  • Azure Data Lake Storage Gen2
  • Azure Databricks
  • Azure Stream Analytics

OCC:
  • Oracle Cloud Infrastructure Data Transfer Appliance: A physical appliance that allows you to transfer large data sets from your on-premises data center to Oracle Cloud.
  • Oracle Cloud Infrastructure FastConnect: A service that provides a dedicated, private connection between your on-premises data center and Oracle Cloud.
  • Oracle Cloud Infrastructure File Transfer: A service that allows you to transfer files between your on-premises data center and Oracle Cloud.
  • Data Pump
  • Data Integrator
  • Data Migration Assistant
  • GoldenGate
  • SQL Developer



The process typically involves several steps:

  • Identification of the data that needs to be migrated
  • Planning for the migration, including assessing the data's size and complexity, determining the necessary resources, and developing a migration schedule
  • Backup of the existing data
  • Testing the migration process
  • Execution of the migration
  • Verification of the migrated data
  • Switchover to the new cloud-based system
  • Post-migration monitoring and maintenance

It's important to note that the specifics of data migration to the cloud can vary depending on the specific cloud service provider and the type of data being migrated.

When migrating data from an on-premise system to a cloud-based system such as AWS, GCP, or OCC, the process typically involves several steps, such as assessing the current data and design, planning the migration, and testing the migrated data.

Cloud migration tools and strategies can include various options, such as using pre-built templates and scripts, leveraging cloud-native services, and utilizing third-party migration tools.

The benefits of on-prem and cloud databases can vary, with on-premise databases providing more control and customization while cloud-based databases often offer scalability and cost savings.

In conclusion, data migration is the process of transferring data from one system to another, spanning different types of data and databases, with various migration tools available in AWS, GCP, Azure, and OCC. Cloud migration tools and strategies can be used to move data from on-premises systems to cloud-based systems, and both on-premises and cloud databases come with their own benefits and drawbacks.



Saturday, January 14, 2023

Artificial Intelligence (AI) Stages and Progress

Artificial intelligence is a growing field of study, and it's essential to understand both the possibilities and the limitations of this technology. Many believe that AI will soon replace human workers, while others think that AI has yet to reach its full potential.

AI can help us automate tasks that would otherwise be done by humans, and it can also improve our lives in ways we never imagined possible before. The future of AI looks bright, and we can't wait to see what comes next!

Artificial Intelligence is on the rise. It's predicted to grow by 33% by 2023 and will only continue to expand as more companies discover its potential.

AI can be used to make work easier and more efficient, saving time and money for businesses. It can also help with data analysis, which allows you to understand your customers better and make better decisions about how you run your business. AI is going to be prevalent in the near future. Based on capability, AI systems are commonly grouped into four types:

  • Reactive Machines
  • Limited Memory
  • Theory of Mind
  • Self-aware

These four types describe AI by capability, from reactive machines through to self-aware systems. Separately, the progress of artificial intelligence is often described in three stages, each depicting a different level of intelligence in the development of AI.

  1. The first stage is Artificial Narrow Intelligence (ANI). This refers to AI that can only perform one task at a time.
  2. The second stage is Artificial General Intelligence(AGI). This refers to AI that can perform multiple tasks with equal proficiency.
  3. The third stage is Artificial Super Intelligence(ASI). This refers to AI that has surpassed human intelligence in all areas.

Artificial Narrow Intelligence refers to any machine capable of performing a single task. This includes things like Siri, which is capable only of performing vocal commands and internet searches. It is considered narrow because it doesn't possess intelligence in the same way humans do. ANI systems can't think abstractly, form their own opinions or generalize skills across different tasks.

Artificial General Intelligence refers to any machine able to perform any human-like task, indistinguishable from human performance. Thus far, most of these systems have not yet been created or tested in the wild.

Artificial Super Intelligence refers to any machine that can outperform humans in virtually every possible way. Again, this is theoretical, as the skill sets required for superintelligence don't exist on Earth today. Take education research, for example: researchers simply don't have enough information to accurately predict every scenario and solve it right now. Until they do, AI will not be able to fully replace teachers as educators.

I will come up with more interesting topics on Machine Learning in the next blog. Until then, good fortune!


Thursday, December 1, 2022

EMD pingOMS error: unable to connect to http server

 When working on an OEM agent upgrade, you might sometimes face the error EMD pingOMS error: unable to connect to http server at OMShost. [handshake has no peer].

It is essentially a bug with no proper fix. You can try various workarounds related to OEM agents, but most of them will not help.

Error:

oracle@sajidserver01.com:$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2018 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version      : 13.4.0.0.0
OMS Version        : (unknown)
Protocol Version   : 12.1.0.1.0
Agent Home         : /u05/app/oracle/agent13c/agent_inst
Agent Log Directory: /u05/app/oracle/agent13c/agent_inst/sysman/log
Agent Binaries     : /u05/app/oracle/agent13c/agent_13.4.0.0.0
Core JAR Location  : /u05/app/oracle/agent13c/agent_13.4.0.0.0/jlib
Agent Process ID       : 1769
Parent Process ID      : 1668
Agent URL              : <Link>
Local Agent URL in NAT : <Link>
Repository URL              :  <Link>
Started at                  : 2022-11-22 10:04:29
Started by user             : oracle
Operating System            : AIX version 7.1 (ppc64)
Number of Targets           : 16
Last Reload                 : (none)
Last successful upload           : 2022-11-22 16:28:12
Last attempted upload            : 2022-11-22 16:28:12
Total Megabytes of XML files uploaded so far   :18.21
Number of XML files pending upload             : 0
Size of XML files pending upload(MB)           : 0.01
Available disk space on upload filesystem      : 52.07%
Collection Status                            : Collections enabled
Heartbeat Status                             : OMS is unreachable
Last attempted heartbeat to OMS              : 2022-11-22 16:28:29
Last successful heartbeat to OMS             : (none)
Next scheduled heartbeat to OMS              : 2022-11-22 16:28:29
---------------------------------------------------------------
Agent is Running and Ready

oracle@sajidserver01.com:$  ./emctl pingOMS
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2018 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD pingOMS error: unable to connect to http server at OMShost. [handshake has no peer]

oracle@sajidserver01.com:$  ./emctl upload agent
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2018 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)

Solution:

Drop the OEM targets related to this server from OEM and decommission the agent as well. After that, install the new agent. The beauty of this approach is that you do not need to reconfigure the targets: they will all be configured automatically, as an OEM self-healing process that is part of the automation.
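
If you prefer to script the cleanup, emcli can drop the agent target and everything it monitors in one call. A hedged sketch; the agent target name is a placeholder:

emcli login -username=sysman
emcli delete_target -name="sajidserver01.com:3872" -type="oracle_emd" -delete_monitored_targets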

Sunday, November 6, 2022

Oracle Database components show UPGRADED rather than VALID after upgrade

Sometimes database components remain in UPGRADED status in the PDB after you have completed a database upgrade from 12c to version 19c.

column comp_name format a30
column status format a10
select comp_name, version, status from dba_registry;
COMP_NAME                              VERSION              STATUS
--------------------------------   ------------------      ----------
Oracle Database Catalog Views          19.0.0.0.0           UPGRADED
Oracle Database Packages and Types     19.0.0.0.0           UPGRADED
JServer JAVA Virtual Machine           19.0.0.0.0           UPGRADED
Oracle XDK                             19.0.0.0.0           UPGRADED
Oracle Database Java Packages          19.0.0.0.0           UPGRADED

SYS@sajiddb1>alter session set "_oracle_script"=TRUE;
session altered

SYS@sajiddb1>alter pluggable database pdb$seed close immediate instances=all;
Pluggable database altered.

SYS@sajiddb1>alter pluggable database pdb$seed OPEN READ WRITE force;
Pluggable database altered.

SYS@sajiddb1>alter session set container=PDB$SEED;
session altered

To resolve it, run the following scripts inside PDB$SEED.

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/utlrp.sql
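
Alternatively, the same scripts can be driven through catcon.pl, which is how Oracle runs them across containers in a multitenant database. A hedged sketch; the log-file prefix is an assumption:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
    -n 1 -d $ORACLE_HOME/rdbms/admin -b catalog_fix catalog.sql   # repeat for catproc.sql and utlrp.sql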

Then reopen PDB$SEED in READ ONLY mode and check the components; they will show VALID now.

SYS@sajiddb1>alter pluggable database pdb$seed close immediate instances=all;
Pluggable database altered.

SYS@sajiddb1>alter pluggable database pdb$seed OPEN READ ONLY force;
Pluggable database altered.

SYS@sajiddb1>show pdbs;
CON_ID  CON_NAME     OPEN MODE   RESTRICTED
------ ------------ ----------- ------------
 2       PDB$SEED     READ ONLY    NO

SYS@sajiddb1>alter session set "_oracle_script"=FALSE;
session altered


column comp_name format a30
column status format a10
select comp_name, version, status from dba_registry;
COMP_NAME                              VERSION               STATUS
--------------------------------    ----------------      ----------
Oracle Database Catalog Views          19.0.0.0.0             VALID
Oracle Database Packages and Types     19.0.0.0.0             VALID
JServer JAVA Virtual Machine           19.0.0.0.0             VALID
Oracle XDK                             19.0.0.0.0             VALID
Oracle Database Java Packages          19.0.0.0.0             VALID

Friday, September 2, 2022

ORA-01586: database must be mounted EXCLUSIVE and not open for this operation

 You can choose among multiple options to drop an Oracle database:

1. DBCA (silent method or GUI)
2. Command-line interface (startup mount restrict, then issue drop database) using SQL*Plus or RMAN
3. Manually cleaning up files

SQL> drop database;
drop database
*
ERROR at line 1:
ORA-01586: database must be mounted EXCLUSIVE and not open for this operation

We are decommissioning a RAC database here, so we first need to set the parameter cluster_database=FALSE.

Set your database profile.

srvctl stop database -d <DB_name>
SQL> startup mount restrict;
SQL> alter system set cluster_database=FALSE scope=spfile;
-- restart so the spfile change takes effect and the instance mounts EXCLUSIVE
SQL> shutdown immediate;
SQL> startup mount restrict;
SQL> drop database;
Database dropped.

How to find temporary MySQL Root Password after installation

 Sometimes, once the MySQL installation is done, you will not be able to find the temporary root password. Here I was working on the latest MySQL version, 8.0.30-commercial MySQL Enterprise Server.

[root@Sajidserver01 ~]# grep -i pass /var/log/mysqld.log

If nothing is found, remove /var/lib/mysql and restart mysqld as below. Be aware that this wipes the data directory, so only do this on a fresh installation.

[root@Sajidserver01 ~]# systemctl stop mysqld
[root@Sajidserver01 ~]# cd /var/lib
[root@Sajidserver01 lib]# rm -rf mysql
[root@Sajidserver01 lib]# systemctl start mysqld

Once that is done, you will find the new temporary password in mysqld.log, as shown below.

[root@Sajidserver01 ~]# grep -i pass /var/log/mysqld.log 
2022-09-01T21:32:34.029030Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: S2<3!chrde

Once you log in with the temporary password, reset it to match your standards using /usr/bin/mysql_secure_installation.
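
If you prefer to reset it by hand, here is a minimal sketch (the new password is a placeholder):

mysql -u root -p   # log in with the temporary password
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'New#Passw0rd!';   -- must run before any other statement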


Monday, August 1, 2022

ORA-16525: The Oracle Data Guard broker is not yet available in the dataguard environment

 Sometimes you might face the error ORA-16525: The Oracle Data Guard broker is not yet available in a Data Guard environment. It can occur for multiple reasons, but one of the main ones is when the node on which real-time apply is running gets rebooted: apply may then come up on a different node and the broker goes into a hung state. You can follow the steps below to remediate it.

[oracle@sajidserver01 ~]$ dgmgrl

DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu Jul 21 12:07:36 2022
Version 19.15.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/<password>;
Connected as SYSDG.
DGMGRL> show configuration;
ORA-16525: The Oracle Data Guard broker is not yet available.
Configuration details cannot be determined by DGMGRL

###
Fix:
###

Bounce the standby database and enable the configuration in dgmgrl.

[oracle@sajidserver01 ~]$ srvctl stop database -d sajid_texas
[oracle@sajidserver01 ~]$ srvctl start database -d sajid_texas

[oracle@sajidserver01 ~]$ dgmgrl

DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu Jul 21 12:07:36 2022
Version 19.15.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/<password>;
Connected as SYSDG.

DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration;
Configuration - SAJID_CONF
  Protection Mode: MaxPerformance
  Members:
  sajid_pittsburgh     - Primary database
  sajid_texas - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS   (status updated 27 seconds ago)

You will see the alert log updated with an attempt to start the background Managed Standby Recovery process: "MRP0: Background Managed Standby Recovery process started", followed by Managed Standby Recovery starting Real-Time Apply.


Tuesday, July 12, 2022

ORA-29548:Java system reported: release mismatch

 When running the preupgrade task to upgrade a database from 12c to version 19.14.0.0.0, either with DBUA or the AutoUpgrade feature, you might face the error below, depending on your system. One common cause is an OJVM patch that was applied or rolled back. If you run into it, you can follow the steps that follow.

Error:

Contact Oracle Support for instructions on how to resolve this error.
Error: ORA-29548 ORA-29548:Java system reported: release mismatch,
12.2.0.1.180717 1.8 in database (classes.bin) vs 12.2.0.1.0 1.8 in executable

SQL> select dbms_java.longname('TEST') from dual;
select dbms_java.longname('TEST') from dual
       *
ERROR at line 1:
ORA-29548: Java system class reported: joxcsys: release mismatch,
12.2.0.1.180717 1.8 in database (classes.bin) vs 12.2.0.1.0 1.8 in executable

SQL> conn / as sysdba
Connected.

SQL> @?/javavm/install/update_javavm_db.sql
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL> alter session set "_ORACLE_SCRIPT"=true;
Session altered.

SQL> -- If Java is installed, do CJS.
SQL> -- If CJS can deal with the SROs inconsistent with the new JDK,
SQL> -- the drop_sros() call here can be removed.
SQL> call initjvmaux.drop_sros();
Call completed.

SQL> create or replace java system;
Java created.

SQL> update dependency$
  set p_timestamp=(select stime from obj$ where obj#=p_obj#)
  where (select stime from obj$ where obj#=p_obj#)!=p_timestamp and
  (select type# from obj$ where obj#=p_obj#)=29 and
  (select owner# from obj$ where obj#=p_obj#)=0;
161 rows updated.

SQL> commit;
SQL> alter session set "_ORACLE_SCRIPT"=false;
Session altered.

SQL> select dbms_java.longname('TEST') from dual;

DBMS_JAVA.LONGNAME('TEST')
-------------------------------------------------------------
TEST

1 row selected.


Once the above steps are complete, recompile the invalid objects. If you then run the preupgrade script again, the error will no longer appear.
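
Recompiling is the standard utlrp run; a quick sketch:

SQL> @?/rdbms/admin/utlrp.sql   -- recompiles invalid objects across the database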

Happy Oracle Database Upgrades!!!