S.No | Command | Description
1 | top | Shows CPU usage, memory usage, process IDs, the user running each process, and the command being run.
2 | df -h | Checks disk-space utilization per mount point, e.g. whether any mount point is at 100%.
3 | du | If df -h shows that a mount point is full, use du to find the large, unwanted files so they can be deleted.
4 | dmesg | Anything related to a hardware issue can be found here, e.g. memory errors, memory leaks, motherboard faults, or a CPU crash.
5 | iostat (I/O statistics) | Gives the read/write rate of each disk. "iostat 1" refreshes every second and displays the output in the same terminal session.
6 | netstat -rnv (or netstat piped to more) | Prints network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
7 | free | Checks physical memory and virtual memory. In the output, the Mem row (total, used, free, shared, buff/cache, available) is physical memory and the Swap row is virtual memory.
8 | cat /proc/cpuinfo | Shows all CPU information for the system.
9 | cat /proc/meminfo | Shows all memory information for the system.
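A few illustrative invocations of the commands above (output will vary by system):
# One batch snapshot of CPU, memory and the busiest processes.
top -b -n 1 | head -20
# Mount-point usage; look for anything at or near 100%.
df -h
# Largest items under a suspect directory on a full mount point.
du -sh /var/log/* | sort -h
# Recent kernel/hardware messages with human-readable timestamps.
dmesg -T | tail -50
# Per-disk I/O statistics: 5 samples, 1 second apart.
iostat 1 5
# Physical and swap memory in MB.
free -m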
Saturday, 16 May 2020
Linux system monitoring
Create an AWS billing alarm using the CloudWatch service
Use case:
Once you start using AWS resources, AWS starts billing for them. Sometimes the bill grows beyond the expected amount, and this situation can lead to financial problems.
Solution:
By following the steps below, an email alert is sent when your AWS bill is approaching the limit you can afford.
Case study:
Suppose Mr. John is an AWS administrator at a leading software company. His company's maximum budget for AWS resources is $50 (fifty dollars) per month. In one month the bill reached $75, which put the company into financial difficulty and also increased the project and maintenance costs. Mr. John did not notice that the AWS bill had reached $75.
Solution:
To avoid the problem above, Mr. John can create an AWS billing alarm using the CloudWatch service for a value of, say, $40 or $45. If the AWS bill reaches $40 or $45, he receives an email notification saying that the bill for the month has reached that amount. Mr. John then has a much better chance of keeping his company's AWS bill within $50 from that point on.
1. Log in to the AWS console.
2. Navigate to Services --> Management & Governance --> CloudWatch, as shown below.
3. Click on the Billing option.
4. Now click on Create alarm, as highlighted below.
5. Enter details such as the currency and the interval at which this check should be performed.
6. In the notification section, choose Create new topic, as shown below. In that screen:
- The topic name ("Billing alarm" here) is just a description.
- Enter the email address to which the notification should be sent.
- Finally, click on Create topic, as highlighted below.
7. A subscription-acknowledgement email will then be sent to the email address provided in the screen above. Please accept the AWS notifications subscription in that mail.
8. In the next screen:
- Define a unique alarm name.
- Provide a description.
- Click on Next.
9. In the Preview and create page, click on Create alarm.
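The same alarm can also be created from the AWS CLI. Below is a minimal sketch, assuming the CLI is already configured with suitable credentials; the topic name, email address, and the $40 threshold are placeholders, and billing metrics are only published in us-east-1:
# Create an SNS topic and subscribe an email address to it (illustrative names).
TOPIC_ARN=$(aws sns create-topic --name billing-alarm-topic --region us-east-1 --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
    --notification-endpoint john@example.com --region us-east-1
# Accept the subscription-confirmation email, then create the billing alarm at $40.
aws cloudwatch put-metric-alarm --region us-east-1 \
    --alarm-name monthly-billing-40-usd \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 40 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions "$TOPIC_ARN"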
Friday, 15 May 2020
Thursday, 14 May 2020
Roles in AWS & their significance
Continuing from my previous posts:
Roles are basically used to connect one AWS service to another AWS service.
Ex: an S3 bucket to EC2, or vice versa.
In this post, I am creating one simple role between an S3 bucket and EC2 (a CLI sketch of the same setup follows these steps).
1. Click on the Roles option in the IAM pages.
2. Now click on Create role, as shown below.
3. Now select the first service, then click on Next: Permissions, as shown below.
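For reference, here is a rough AWS CLI sketch of the same S3-to-EC2 role; the role name, the AmazonS3ReadOnlyAccess policy choice, and the instance ID are my own illustrative assumptions, not part of the console steps above:
# Trust policy that lets EC2 assume the role.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name s3-access-for-ec2 --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name s3-access-for-ec2 \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# EC2 consumes roles through an instance profile.
aws iam create-instance-profile --instance-profile-name s3-access-for-ec2
aws iam add-role-to-instance-profile --instance-profile-name s3-access-for-ec2 --role-name s3-access-for-ec2
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=s3-access-for-ec2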
Wednesday, 13 May 2020
SAP ASE Backupserver fails to start with "The 'sem_open' call failed with error number 13 (Permission denied)"
Issue:
- The backup server cannot be started manually.
- While installing the SYBASE (SAP ASE) database, the installation stops with the error:
Assertion failed: unable to start backup server.
Solution:
Refer to SAP Note 2794211.
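SAP Note 2794211 contains the authoritative fix. As a quick general first check (my own suggestion, not taken from the note): on Linux, sem_open() creates POSIX named semaphores under /dev/shm, so permission or mount problems there commonly produce error 13:
ls -ld /dev/shm               # normally world-writable with the sticky bit (drwxrwxrwt)
df -h /dev/shm                # confirm the tmpfs is mounted and has free space
ls -l /dev/shm | grep -i sem  # look for stale semaphore files owned by another user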
How to reset IAM user password
- Log in to the AWS console as the root user.
- Navigate to Services --> Security, Identity, & Compliance --> IAM.
- The screen below then appears; click on the Users link under the IAM Resources section.
- Then click on the user name for which the password reset is required.
- With the above steps, the password reset of the IAM user is completed.
How to inactivate an IAM user access key:
Continuing from the screens above: from the Security credentials tab, use the option to make the access key inactive, as shown below.
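Both tasks can also be done from the AWS CLI, as sketched below; the user name, temporary password, and access key ID are placeholders:
# Reset the console password and force a change at next sign-in.
aws iam update-login-profile --user-name john --password 'TempPassw0rd!' --password-reset-required
# List the user's access keys, then mark one inactive.
aws iam list-access-keys --user-name john
aws iam update-access-key --user-name john --access-key-id AKIAXXXXXXXXXXXXXXXX --status Inactive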
Tuesday, 12 May 2020
Creating new IAM users
Following up on my previous post:
IAM users are defined globally by default. This means that if an IAM user is created in one specific region, he or she can access the account's AWS resources in other AWS regions as well.
Here are the practical steps involved in creating new IAM users in AWS (a CLI equivalent is sketched at the end of the steps):
1. Click on the Create individual IAM users option, as shown in the screenshot below. Before that, make sure you are in the Dashboard section.
2. Click on Add user.
3. Enter the IAM user name, access type, console password parameters, and the Require password reset option, as appropriate for the user. Then click on the Next: Permissions button at the bottom right.
4. If the AWS account is new and no user groups have been created yet, an IAM user group needs to be created here.
5. Provide the group name and assign a suitable policy, as shown below.
6. To select the correct policy, go by the description of the policy, and search for IAM user policies to pick the right ones for your requirements.
7. Then click on the Create group option, shown in the top-right corner of the screen above.
8. Review the parameters and click through the review confirmation.
9. Review again and click on Create user, which is available in the bottom-right corner.
10. In the screenshot below:
1. Indicates that the user was created successfully.
2. Provides the option to download the IAM user details in Excel format.
3. Provides the access key details.
4. Provides the secret access key details.
5. Provides the password details the IAM user needs to access the AWS console.
11. Then, in the Dashboard section, the user-creation and group-creation tasks now show a green status.
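Here is a rough AWS CLI equivalent of the console flow above; the group name, user name, policy, and temporary password are illustrative placeholders:
# Group with an attached managed policy.
aws iam create-group --group-name s3-admins
aws iam attach-group-policy --group-name s3-admins \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# User with console access, a forced password reset, and programmatic access keys.
aws iam create-user --user-name john
aws iam add-user-to-group --group-name s3-admins --user-name john
aws iam create-login-profile --user-name john --password 'TempPassw0rd!' --password-reset-required
aws iam create-access-key --user-name john   # returns the access key ID and secret access key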
Saturday, 9 May 2020
Intro to Identity Access Management, and enabling MFA code authentication for the root user in AWS
What is IAM?
IAM allows you to manage users and their level of access to the AWS console.
Features of IAM:
1. Centralised control of your AWS account.
2. Shared Access to your AWS account.
3. Granular permissions, which allow you to grant access only to the services that are required and to restrict the services that are not needed.
4. Identity federation (including Active Directory, Facebook, LinkedIn, etc.).
- Active Directory: users can log in to the AWS account using the same credentials they use to log in to their physical host.
- Facebook credentials are used, for example, by gaming applications that store some data in an AWS account.
5. Multi-factor authentication (MFA), which is set up later in this post.
6. Allows you to set up a password rotation policy (for example, every 3 months).
Key terminology for IAM:
1. Users - end users.
2. Groups - collections of users, where a set of authorizations is inherited by all the group members.
Example: one set of users only needs access to an S3 bucket, while another set only needs access to EC2.
3. Policies: JSON documents used to define the permissions describing what a user, group, or role is able to do.
4. Roles: users can create roles and assign them to AWS resources to perform some task.
Ex: integration of EC2 with an S3 bucket.
Practical Steps:
1. Log in to the AWS console with your credentials.
2. Navigate to the following section:
Services --> Security, Identity, & Compliance --> IAM
3. The screen in the screenshot below then appears.
Points 1, 2 & 3 in the screen are:
- This is the actual sign-in link that an AWS admin can share with their AWS end users to access the resources allocated to them.
- Customize option: using this option, the account alias in the sign-in link can be changed, provided the new alias has not already been taken by anyone else. For example, if acloudguru2020ryan appears in the IAM link, it can be replaced with TESTIAM, and the link then changes to https://TESTIAM.signin.aws.amazon.com/console.
- The copy button is used to copy the https link and share it with the people concerned.
4. The very next step in the screen above is to activate MFA (multi-factor authentication) on your root account.
This step is needed because, even if someone obtains the AWS root user credentials, they cannot log in to the AWS account without the MFA passcode.
To activate MFA for the root account:
- Click on Activate MFA on your root account (as shown in the screenshot above).
5. The screen then prompts with a few confirmation dialogs; read and confirm them.
6. Click on Activate MFA, as shown below.
8. Then download the Google Authenticator app from the Play Store to generate passcodes.
9. Once the Google Authenticator app has been downloaded from the Play Store, click on Continue in the screenshot above.
- For backup purposes, it is better to take a screenshot of the QR code below.
Then, in the Google Authenticator app, passcodes are generated back to back at a fixed interval (every 30 seconds). As per the 3rd point in the step 9 screenshot, enter two consecutive MFA codes from the Google Authenticator app.
14. Now, in the Dashboard section as well, the confirmation appears with a green tick, as shown below:
Hence, enabling MFA code authentication for the root user is configured successfully and completely. The Creating new IAM users concept will be published in my next post.
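Root MFA is enabled through the console as shown above. For an ordinary IAM user, the equivalent can be scripted roughly as below; the device name, user name, account ID, and the two codes are placeholders:
# Create a virtual MFA device and save its QR code locally.
aws iam create-virtual-mfa-device --virtual-mfa-device-name john-mfa \
    --outfile /tmp/john-mfa-qr.png --bootstrap-method QRCodePNG
# Scan the QR code in Google Authenticator, then register two consecutive codes.
aws iam enable-mfa-device --user-name john \
    --serial-number arn:aws:iam::123456789012:mfa/john-mfa \
    --authentication-code1 123456 --authentication-code2 654321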
Server information using Linux scripts
#!/bin/bash
# Usage: show-cmdb-info [-c]
# Display server information that may be requested for CMDB entries or inventory.
# Options:
# -c Output in CSV format.
# Run as root
[ "$EUID" -eq 0 ] || {
echo 'Please run with sudo or as root.'
exit 1
}
FIELD_SEPARATOR=': '
# Pass in -c for CSV format
[ "$1" = '-c' ] && FIELD_SEPARATOR=','
# Host name
echo "Hostname${FIELD_SEPARATOR}$(uname -n)"
echo "FQDN${FIELD_SEPARATOR}$(hostname -f)"
# System model, product, serial
dmidecode -t system | egrep 'Manufacturer:|Product Name:|Serial Number:' | sed 's/^\s*//' | sed "s/: /${FIELD_SEPARATOR}/"
# OS, kernel, platform.
echo "Release${FIELD_SEPARATOR}$(cat /etc/redhat-release)"
echo "Kernel Release${FIELD_SEPARATOR}$(uname -r)"
echo "Architecture${FIELD_SEPARATOR}$(uname -m)"
# Memory:
# Assumes MB are being reported by dmidecode.
MEMORY_IN_MB=$(dmidecode --type memory | grep -e '^\sSize:' | grep -v "No" | awk '{sum+=$2} END{printf("%d\n",sum)}')
MEMORY_IN_GB=$(($MEMORY_IN_MB / 1024))
echo "Memory in GB${FIELD_SEPARATOR}${MEMORY_IN_GB}"
# CPU info:
TOTAL_PHYSICAL_CPUS=0
TOTAL_CORES=0
TOTAL_THREADS=0
echo "CPU Model${FIELD_SEPARATOR}$(cat /proc/cpuinfo | grep 'model name' | cut -f2 -d: | sort -u | sed 's/^\s*//')"
for PHYSICAL_CPU in $(cat /proc/cpuinfo | grep 'physical id' | sort -u | awk '{print $NF}')
do
CORES=$(cat /proc/cpuinfo | grep -E -m1 -A6 "physical.*:\ ${PHYSICAL_CPU}$" | grep -i 'cpu cores' | awk '{print $NF}')
THREADS=$(grep 'physical id' /proc/cpuinfo | grep ": ${PHYSICAL_CPU}$" | wc -l)
echo "CPU ${PHYSICAL_CPU} Cores${FIELD_SEPARATOR}${CORES}"
echo "CPU ${PHYSICAL_CPU} Threads${FIELD_SEPARATOR}${THREADS}"
TOTAL_PHYSICAL_CPUS=$(($TOTAL_PHYSICAL_CPUS + 1))
TOTAL_CORES=$(($TOTAL_CORES + $CORES))
TOTAL_THREADS=$(($TOTAL_THREADS + $THREADS))
done
echo "Total Physical CPUs${FIELD_SEPARATOR}${TOTAL_PHYSICAL_CPUS}"
echo "Total Cores${FIELD_SEPARATOR}${TOTAL_CORES}"
echo "Total Threads${FIELD_SEPARATOR}${TOTAL_THREADS}"
echo "Storage assigned to volume groups in GB${FIELD_SEPARATOR}$(vgs --units g --noheadings 2>&1 | grep -v "^|" | grep -v 'Could' | awk '{print $6}' | cut -f1 -d. | awk '{sum+=$1} END{printf("%d\n",sum)}')"
# NICS
echo "Network interfaces${FIELD_SEPARATOR}$(netstat -i | egrep -v 'Iface|Interface' | awk '{print $1}' | grep -v "^lo$" | sort | xargs)"
echo "IP Addresses${FIELD_SEPARATOR}$(ip -4 -o addr | awk '{print $4}' | cut -f1 -d/ | grep -v '127.0.0.1' | xargs)"
# Timezone
echo "Timezone${FIELD_SEPARATOR}$(date +%Z)"
Friday, 8 May 2020
Touch test in Linux, for identifying read-only file systems and poorly performing mount points
#!/bin/bash
# Usage: touch-test
# Touches and deletes a file on each of the locally mounted file systems.
# This can help point out read-only mounts and poorly performing mounts.
# Run as root
[ "$EUID" -eq 0 ] || {
echo 'Please run with sudo or as root.'
exit 1
}
TEST_FILE='touch-test-file'
START=$(date)
START_SECONDS=$(date +%s)
for MOUNT in $(df -lP | egrep -v '^Filesystem|tmpfs' | awk '{print $NF}')
do
TEST_FILE_ON_MOUNT="${MOUNT}/${TEST_FILE}"
echo "$(date) - Touching $TEST_FILE_ON_MOUNT"
touch "$TEST_FILE_ON_MOUNT"
rm "$TEST_FILE_ON_MOUNT"
echo "$(date) - Removed $TEST_FILE_ON_MOUNT"
done
END=$(date)
END_SECONDS=$(date +%s)
TOTAL_SECONDS=$(($END_SECONDS - $START_SECONDS))
echo
echo "Start: $START"
echo "End: $END"
echo "Total: $TOTAL_SECONDS seconds"
Linux script for show-total-disk-space-used
#!/bin/bash
# Usage: show-total-disk-space-used
# Shows how much local disk space is in use by the server.
function round() {
# Returns a rounded number
local INTEGER=$(echo $1 | cut -f1 -d.)
[ -z "$INTEGER" ] && INTEGER=0
local DECIMAL=$(echo $1 | cut -s -f2 -d.)
[ -z "$DECIMAL" ] && DECIMAL=0
[ "$DECIMAL" -gt 4 ] && INTEGER=$(($INTEGER + 1))
echo $INTEGER
}
# Disk size used in kb, summed
KB=$(df -lkP | awk '{print $3}' | grep -v Used | awk '{sum+=$1} END{printf("%d\n",sum)}')
# Convert size to MB, GB, and TB
MB=$(round $(echo $KB/1024 | bc -l | sed -e "s/\(\.[0-9]\).*/\1/g"))
GB=$(round $(echo $KB/1024/1024 | bc -l | sed -e "s/\(\.[0-9]\).*/\1/g"))
TB=$(round $(echo $KB/1024/1024/1024 | bc -l | sed -e "s/\(\.[0-9]\).*/\1/g"))
# Use the largest human readable size to display
if [ "$TB" -gt 0 ]
then
TOTAL_DISK_SPACE_USED="${TB}T"
elif [ "$GB" -gt 0 ]
then
TOTAL_DISK_SPACE_USED="${GB}G"
elif [ "$MB" -gt 0 ]
then
TOTAL_DISK_SPACE_USED="${MB}M"
else
TOTAL_DISK_SPACE_USED="${KB}K"
fi
echo -e "$(uname -n)\t${TOTAL_DISK_SPACE_USED}"
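An example invocation of the script above (the hostname and size shown are illustrative):
./show-total-disk-space-used
# myserver    42G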
Wednesday, 6 May 2020
2679373 - SWPM SYB: unable to set up backup server
Symptom
Error reported by SAP Installer is similar to the following:
Assertion failed: Unable to set up backup server. Refer to trace file sapinst_dev.log for further information.
Other Terms
ASE, backup server
Reason and Prerequisites
You are trying to install an SAP System based on ASE release 16.0 SP03 PL04 on the AIX platform. The host is configured to use virtual host names.
The SAP installer presents the message "Unable to set up backup server" and the installer's trace file sapinst_dev.log includes entries like:
Building Backup Server '<SID>_BS':
Writing entry into directory services...
Directory services entry complete.
Writing RUN_<SID>_BS file...
RUN_POP_BS file complete.
Starting server...
Unable to boot server '<SID>_BS'.
Task failed
Server '<SID>_BS' was not created.
The corresponding backup server log file /sybase/<SID>/ASE-16_0/install/<SID>_BS.log includes traces like:
Started listener at tcp <IP ADDRESS 1> 4902
Backup Server: 2.24.2.1: The host '<IP ADDRESS 2>' is not authorized to connect to this Backup Server.
Backup Server: 5.40.2.1: Login host authentication has failed.
The root cause of the issue is that backup server does not recognize that IP ADDRESS 1 and IP ADDRESS 2 are the same host and requires a "remote host configuration". Further information about the topic can be found in the ASE documentation in the chapter "Remote Dump Host Control".
Solution
Create text file /sybase/<SID>/hosts.allow with the IP addresses from the backup server log with the following line:
<IP ADDRESS 1> <IP ADDRESS 2>
Set the file ownership to syb<sid>:sapsys with permission 640.
Terminate the running backup server with the OS command 'kill'.
Press the 'retry' button in the SAP installer dialog screen.
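A shell sketch of the same remediation, run as root or the Sybase OS user; <SID>, syb<sid>, the IP addresses, and the PID are placeholders taken from your own system and the backup server log:
echo '<IP ADDRESS 1> <IP ADDRESS 2>' > /sybase/<SID>/hosts.allow
chown syb<sid>:sapsys /sybase/<SID>/hosts.allow
chmod 640 /sybase/<SID>/hosts.allow
# Find and terminate the running backup server, then press 'retry' in the SAP installer.
ps -ef | grep -i backupserver
kill <backup server PID>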
Saturday, 2 May 2020
ST22 - ABAP dumps
ST22 - ABAP Dump Analysis
References for a few real-time issues and their solutions:
- Import_buffer error in ST22:
Issue: the transport request was not imported correctly.
Solution: change the affected ABAP program (the one captured in the transport request) manually in the QAS (target) server with the help of an ABAPer. Opening the client in SCC4 may be required here if the target server is Quality or Production, so that the changes can be adopted in the target server directly.
ST11 - Developer Trace
- Traces such as dev_rfc, dev_disp, and dev_w0 ... dev_w15 (one per work process) are available here.
- Double-click on any work process trace for analysis, if required.
To change the trace level: SM50 --> Administration (in menu) --> Traces --> Dispatcher --> Change trace level --> Value.
Another way to check these traces:
Only if the server is an MS SQL DB system based on Windows Server: open SAP MMC --> right-click on the system SID for which you need the developer trace --> choose the Developer trace option --> the developer traces can be monitored there.
For a Linux-based system, or a system on any other OS, use SAP MC (see the separate post for more info regarding SAP MC).