Thursday, 7 December 2023

Kernel profile in SAP HANA

Activity 1:


The following statement starts profiling for the SYSTEM user, sets a memory limit of 1 GB, and sets a sampling interval of 2 milliseconds.




ALTER SYSTEM START KERNEL PROFILER USER SYSTEM MEMORY LIMIT 1073741824 SAMPLING INTERVAL 2; 
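As a quick sanity check on the MEMORY LIMIT value above, the byte count for 1 GB can be derived in a short Python sketch (a minimal illustration of the arithmetic, not part of the HANA syntax):

```python
# The MEMORY LIMIT clause takes a value in bytes.
# 1 GB = 1024 * 1024 * 1024 bytes.
gb = 1
memory_limit_bytes = gb * 1024 ** 3
print(memory_limit_bytes)  # 1073741824, the value used in the statement
```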

Activity 2:

The following statement stops profiling, saves the data for the specified call stacks into the trace directory, and then frees the memory that was used for the profiling data.

ALTER SYSTEM SAVE KERNEL PROFILER FOR CALLSTACK 'Execution::JobExecWatchdog::run','System::ProcessorInfo::getCurrentProcessorIndex' INTO CPU FILE 'cpu.dot' WAIT FILE 'wait.dot';

Activity 3:


ALTER SYSTEM STOP KERNEL PROFILER;

Stops the Kernel Profiler (but does not free up the allocated memory).

<stop_profiler> ::= STOP KERNEL PROFILER [ <location> ]


Activity 4:

The following statement clears all previously collected Kernel Profiler data.


ALTER SYSTEM CLEAR KERNEL PROFILER;

Wednesday, 6 December 2023

All about HDBUSERSTORE LIST


ACTIVITY1:



hdbuserstore SET <HANASITTER1KEY> dc1-saphanad01:30044 HANASITTER1 Naveen@8977779532


<HANASITTER1KEY> - hana user store key name

dc1-saphanad01 - hostname

30044 - SQL port of the tenant

HANASITTER1 - User

Naveen@8977779532 - Password



Example 2:

Create a key:

hdbuserstore -i SET <key_name> <hostname>:<port>@<DB Name> <Username> <Password>

where,
key_name - Name you want to give to your key
hostname - Hostname on which DB is installed (can be localhost or ip or virtual host)
port - Port number to be used to connect to DB
Username - User for which you are creating the key
Password - Password of the above existing user
DB Name - Database name in MDC environment
Combination of hostname:port is known as environment and represented as env

Note: It's not mandatory to provide DB name while generating key but it's a good practice to prevent conflicts.

Example:
hdbuserstore -i SET X vhabcxyzdb:30213@SYSTEMDB SYSTEM hello@123
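The pieces of the SET command can also be assembled programmatically. The sketch below is a hypothetical helper (the function name and structure are my assumptions, not part of hdbuserstore) that builds the command string from the parts described above:

```python
def build_hdbuserstore_set(key, host, port, user, password, database=None):
    """Build an 'hdbuserstore -i SET' command string.

    The env part is host:port, optionally followed by @database
    in an MDC environment.
    """
    env = f"{host}:{port}"
    if database:
        env += f"@{database}"
    return f"hdbuserstore -i SET {key} {env} {user} {password}"

cmd = build_hdbuserstore_set("X", "vhabcxyzdb", 30213, "SYSTEM", "hello@123",
                             database="SYSTEMDB")
print(cmd)
# hdbuserstore -i SET X vhabcxyzdb:30213@SYSTEMDB SYSTEM hello@123
```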





ACTIVITY2:


To delete an hdbuserstore key:

hdbuserstore DELETE HANASITTER1KEY



ACTIVITY3:

List all keys

hdbuserstore list



ACTIVITY4:

Using key to login

hdbsql -U <key_name>
You can see the other available operations using the hdbuserstore -h command.



ACTIVITY5:

ListFromDir <DIR>

        List entries from a store in <DIR>.

        <DIR>       store directory from which entries to be read


ACTIVITY6:

  ChangeKey

        Generate new encryption key and encrypt passwords again.


ACTIVITY 7:

AddFromDir <DIR>

        Add entries from a store in <DIR> without overwriting existing keys.

        <DIR>       store directory from which entries to be read


-----------------------------------------------------------------------------------------------------------------------




ACTIVITY 8:



hdbuserstore -h

Usage: hdbuserstore [options] command [arguments]


Options:

  -u <USER>       perform operation for other operating system user

  -v              verbose mode, print operations done

  -i              interactive mode, ask for missing values

  -h              this help message

  -H <HOST>       assume host name <HOST>

Commands (the command name is case insensitive):

  Help

        Print help message.

  Set <KEY> <ENV>[@<DATABASE>] <USERNAME> <PASSWORD>

        Add or update a profile entry.

        <KEY>       entry key name

        <ENV>       database location (host:port)

        <USERNAME>  user name

        <PASSWORD>  password

        <DATABASE>  database name in MDC environment

  AddFromDir <DIR>

        Add entries from a store in <DIR> without overwriting existing keys.

        <DIR>       store directory from which entries to be read

  Delete <KEY>

        Delete entry with key <KEY>.

  List [<KEY> ...]

        List entries of store. The passwords are not shown.

  ListFromDir <DIR>

        List entries from a store in <DIR>.

        <DIR>       store directory from which entries to be read

  ChangeKey

        Generate new encryption key and encrypt passwords again.

---------------------------------------------------------------------------------------------------------------



Sunday, 3 December 2023

Host, Database,system,default layers, Parameters configuration in SAP HANA

Host layer: has the highest priority; a parameter configured in the host layer takes precedence over the database, system, and default layers.

Database layer: parameters defined here are specific to a tenant database; they take precedence over the system and default layers.

System layer: parameters defined here apply to all tenant databases in one system; they take precedence over the default layer.

Default layer: has the lowest priority of the four parameter layers.
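The layer precedence can be modeled as a simple ordered lookup. The following Python sketch is only an illustration of the precedence rule (the dictionaries and function are assumptions, not a HANA API):

```python
# Precedence from highest to lowest, as described above.
LAYER_PRIORITY = ["HOST", "DATABASE", "SYSTEM", "DEFAULT"]

def effective_value(parameter, layers):
    """Return the value from the highest-priority layer that sets it."""
    for layer in LAYER_PRIORITY:
        if parameter in layers.get(layer, {}):
            return layers[layer][parameter]
    return None

layers = {
    "DEFAULT": {"statement_memory_limit": "0"},
    "SYSTEM": {"statement_memory_limit": "50"},
    "DATABASE": {"statement_memory_limit": "20"},
}
print(effective_value("statement_memory_limit", layers))  # 20 - database layer wins
```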

Friday, 1 December 2023

Runtime dump file creation in SAP HANA

Switch to the <sid>adm user:


cdpy 


h2dadm@dc1-saphanad01:/usr/sap/H2D/HDB00/exe/python_support> python fullSystemInfoDump.py --rtedump --tenant=DB9  --sets=1 --interval=1 --services=indexserver

System Info Dump created 2023-12-01 18:51:45 (UTC) with script version 2.40

Called with command line options: --rtedump --tenant=DB9 --sets=1 --interval=1 --services=indexserver


FSID of the last 7 day(s) (from date: 2023-11-24, to date: 2023-12-01)


Writing to file /usr/sap/H2D/SYS/global/sapcontrol/snapshots/fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_51_45.zip



----- 2023-12-01 10:51:45 Exporting runtime dump information -----

Collecting runtime dump for IndexServer on host dc1-saphanad01:30043 ...

section 3/32 done: STACK_SHORT - Short call stacks and pending exceptions of all threads

section 4/32 done: CPUINFO - CPU info

section 5/32 done: MEMMAP - Memory map

section 11/32 done: STATISTICS - Statistics data

section 12/32 done: PROCESS_INFO - Process Info

section 16/32 done: INDEXMANAGER_WAITGRAPH - Wait graph for index handles

section 17/32 done: INDEXMANAGER_STATE - IndexManager internal state

section 22/32 done: THREADS - Running threads

section 25/32 done: REGISTERED_STACK_OBJECTS - Objects on stack registered to be dumped

section 27/32 done: IPMM_MEMORY - IPMM information

section 28/32 done: MEMORY_ACCOUNTING - Memory accounting

section 30/32 done: OS_MEMORY - Operating system information about memory

done.




System information written to file /usr/sap/H2D/SYS/global/sapcontrol/snapshots/fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_51_45.zip

Dump created between 2023-12-01 18:51:45 and 2023-12-01 18:51:49 (UTC). Duration: 0:00:03.638853


Full System Info Dump done.


fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_51_45.zip

[OK]

h2dadm@dc1-saphanad01:/usr/sap/H2D/HDB00/exe/python_support> cd /usr/sap/H2D/SYS/global/sapcontrol/snapshots/

h2dadm@dc1-saphanad01:/usr/sap/H2D/SYS/global/sapcontrol/snapshots> ls -latr | grep 'fullsystem*'

total 19644

-rw-r--r-- 1 h2dadm sapsys 5417780 Sep 30  2021 fullsysteminfodump_H2D_QE7_dc1-saphanad01_2021_10_01_01_05_16.zip

drwxr-xr-x 3 h2dadm sapsys      72 Nov  5 02:19 ..

-rw-r--r-- 1 h2dadm sapsys 3098607 Dec  1 09:43 fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_17_43_53.zip

-rw-r--r-- 1 h2dadm sapsys 6173857 Dec  1 10:10 fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_09_25.zip

-rw-r--r-- 1 h2dadm sapsys 3082785 Dec  1 10:14 fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_14_43.zip

-rw-r----- 1 h2dadm sapsys 2327898 Dec  1 10:51 fullsysteminfodump_H2D_DB9_dc1-saphanad01_2023_12_01_18_51_45.zip

drwxrwx--- 2 h2dadm sapsys    4096 Dec  1 10:51 .

h2dadm@dc1-saphanad01:/usr/sap/H2D/SYS/global/sapcontrol/snapshots>


Thursday, 30 November 2023

SQL command for CPU utilization in HANA DB


---------------------------------------------------------------------------------------------------------------------------------------------------------

SELECT TOP 20 * FROM M_EXPENSIVE_STATEMENTS WHERE START_TIME BETWEEN '2023-11-29 00:48' AND '2023-11-29 01:01' ORDER BY CPU_TIME DESC;

--------------------------------------------------------------------------------------------------------------------------------------------------------------

Managing Large tables with partitioning

When monitoring the system, you can sometimes see tables that have grown so large in SAP HANA that it makes sense to split them "horizontally" into smaller partitions. By default, you can partition column tables in SAP HANA, which is ideal for managing high data volumes. SQL statements - or any data manipulation language (DML) statements - do not need to know that the data is partitioned. Instead, SAP HANA automatically manages the partitions behind the scenes. Thus, SAP HANA simplifies access and frontend development and gives administrators a key tool to manage disks, memory, and large column stores.


In a distributed (scale-out) SAP HANA system, you can also place the partitions on different nodes and thereby increase performance even more, because more processors are available for your users. In fact, this may become the standard deployment method for extremely large systems with tens of thousands of users.

Currently, SAP HANA supports up to 2 billion rows in a single column table. In a partitioned schema, you can have 2 billion rows per partition, with virtually no limit on how many partitions you can add. You are limited not by a database limitation but by hardware and landscape architecture issues. From an administration standpoint, you can create partitions in SAP HANA in three different ways: with a range, with a hash, and by the round-robin method. More complex schemas are possible with higher-level options, but let's take a look at these fundamental partitioning choices.


Partitioning column tables using a Range

If you know your data really well, you can partition the data by any range in your table. Although the most common range is a date, you can also partition by material numbers, postal codes, customer numbers, and so on.

Partitioning by date makes sense if you want to increase query speed and keep current data on a single node. Partitioning by customer number makes sense if you are trying to increase the speed of delta merges, because multiple nodes can be used at the same time during data loads. You'll have to spend some time thinking about what benefits you want to achieve before undertaking any partitioning scheme. Note that the maintenance effort for range partitions is somewhat higher than for the other options, because you have to keep adding new partitions as data outside the existing partitions emerges (e.g., next year's data if you partition by year now).


Partitioning is done using SQL with the syntax seen in the listing below.
------------------------------------------------------------------------------------------------------------------------
CREATE COLUMN TABLE SALES (sales_order INT ,  customer_number INT , quantity INT , PRIMARY KEY (sales_order))
PARTITION BY RANGE ( sales_order)
(PARTITION 1 <= values < 100000000,
PARTITION 100000000 <= values < 200000000,

PARTITION OTHERS)
--------------------------------------------------------------------------------------------------------------------------




This syntax creates a table with three partitions. The first two hold 100 million sales order numbers each, and the last partition holds all other records. However, you must follow some basic rules: first, the field you are partitioning on has to be part of the primary key (i.e., sales_order). Second, the field has to be defined as a string, a date, or an integer. Finally, you can only partition column stores by ranges, not row stores.
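To see how rows are routed by these ranges, here is an illustrative Python sketch of the partition choice (just the logic of the ranges in the listing, not HANA internals):

```python
def range_partition(sales_order):
    """Mirror the ranges from the CREATE TABLE listing above."""
    if 1 <= sales_order < 100000000:
        return 1
    if 100000000 <= sales_order < 200000000:
        return 2
    return "OTHERS"  # everything else lands in PARTITION OTHERS

print(range_partition(42))         # 1
print(range_partition(150000000))  # 2
print(range_partition(250000000))  # OTHERS
```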


Partitioning column tables using a Hash:

Unlike partitioning by ranges, partitioning column stores by a hash doesn't require any in-depth knowledge of the data. Instead, partitions are created by an internal algorithm applied to one or more fields in the database by the system itself. This algorithm is known as a hash. Records are then assigned to the required partitions based on this internal hash number. The partitions are created in SQL with the following syntax:

--------------------------------------------------------------------------------------------------------------------

CREATE COLUMN TABLE SALES ( sales_order INT , customer_number INT , quantity INT , PRIMARY KEY ( sales_order, customer_number)) PARTITION BY HASH (sales_order, customer_number)
PARTITIONS 6

---------------------------------------------------------------------------------------------------------------------

This example creates six partitions by sales orders and customer numbers. Some rules apply here as well: if the table has a primary key, the primary key must be included in the hash. If you partition on more than one column and your table has a primary key, all fields used for partitioning must also be part of the primary key. If you leave off the number 6, the system determines the optimal number of partitions itself based on your configuration. Therefore, using PARTITIONS without a number is the recommended setting for most hash partitions.
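The principle of hash partitioning can be sketched in a few lines of Python. HANA's internal hash function is not public, so the MD5-based helper below is only an illustration of the idea that a hash of the key columns modulo the partition count selects the partition:

```python
import hashlib

def hash_partition(sales_order, customer_number, num_partitions=6):
    """Assign a row to a partition by hashing the key columns.

    HANA's internal hash differs; this only shows the principle:
    the same key always maps to the same partition.
    """
    key = f"{sales_order}:{customer_number}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return digest % num_partitions

p = hash_partition(1001, 77)
print(p)  # a value in 0..5; deterministic for a given key
assert p == hash_partition(1001, 77)
```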


Partitioning column Tables using the Round-Robin method:


In a round-robin partition, the system assigns records to partitions on a rotating basis. The round-robin method makes efficient assignments and requires no knowledge of the data. However, removing partitions in the future will be more difficult because both new and old data will be in the same partitions.


The Partitions are created in SQL with the following syntax:

-----------------------------------------------------------------------------------------------------------------------

CREATE COLUMN TABLE SALES (sales_order INT , customer_number INT , quantity INT)
PARTITION BY ROUNDROBIN
PARTITIONS 6


------------------------------------------------------------------------------------------------------------------------

This example creates six partitions and assigns records on a rotating basis. If you change the last clause to PARTITIONS GET_NUM_SERVERS(), the system will assign the optimal number of partitions based on your system landscape. The only requirement for the round-robin method is that the table cannot contain a primary key.
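Round-robin assignment is just a rotating counter. The following Python sketch shows the principle (again an illustration, not HANA's implementation):

```python
from itertools import cycle

def round_robin_assign(rows, num_partitions=6):
    """Assign each row to partitions 0..num_partitions-1 in rotation."""
    partitions = cycle(range(num_partitions))
    return [(row, next(partitions)) for row in rows]

rows = ["r0", "r1", "r2", "r3", "r4", "r5", "r6"]
print(round_robin_assign(rows))
# r6 wraps around to partition 0 again
```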



Sunday, 26 November 2023

How To Backup Windows Server 2016



Here we’ll show you how to install the Windows Server Backup feature in Windows Server 2016 with PowerShell and then configure basic server backups.

We’ll also take a quick look at how the backup files are stored and see if they are at all encrypted or not.

In order to perform simple backups and restores out of the box, Windows Server 2016 provides the Windows Server Backup feature. While this does work, it is fairly basic and lacks many useful features. If you have many critical servers to back up, I highly recommend that you look at something else, such as Microsoft's Data Protection Manager (DPM) or a third-party backup solution from another vendor.

An example of this is that when you back up over the network to an external share with Windows Server Backup, you can only store one restore point at a time; any further backups will overwrite existing ones, which isn't very useful if you actually need some sort of retention. The backups are also managed on a per-server basis, which makes them difficult to maintain and manage.

Install Windows Server Backup Feature

Before we can actually configure the backups, we must first install the Windows Server Backup feature. This can be done through the graphical user interface, however it’s just faster to simply use PowerShell.

First we’ll use the Get-WindowsFeature cmdlet to search for available features that contain the string ‘*Backup*’ in the Name field.

PS C:\> Get-WindowsFeature -Name *Backup*

 


As we can see Windows-Server-Backup is available for install but not currently installed.

Next we use the Install-WindowsFeature cmdlet to install the Windows-Server-Backup feature, as shown below.

PS C:\> Install-WindowsFeature -Name Windows-Server-Backup

 


Once complete, we should see that the installation finished successfully. No reboot is required for this feature; we can begin using it straight away.

Configure Backups

Now that we have installed the Windows Server Backup feature, we can begin to configure our server backups. To begin we’re going to open the Windows Server Backup console from the Tools menu in Server Manager as shown below.

 


From the wbadmin window that opens, select "Local Backup" from the menu on the left. We will see a warning noting that no backup has been configured for this computer, which is expected at this point. We can either configure a backup schedule that performs the backup as we define it, or perform a one-off backup. Here we'll set up a backup schedule by clicking "Backup Schedule" on the right.

 


This will open up a Getting Started window advising that we can use this wizard to configure a regular backup schedule for the full server, system state, selected files, folders or entire volumes – click Next to proceed.

 


On the next Select Backup Configuration window we can select if we want to perform a full server backup which is the recommended and default action, or we can optionally select custom to pick specific volumes or files that we want to backup rather than the full server. In this example we will be configuring a full server backup, however you can change this to suit your requirements.

 


The next window allows us to specify the time the backup should run. We can set the backup to run once daily at a specified time, or we can optionally have it run multiple times per day by selecting the more than once a day option and adding the times the backup should run to the right side.

 


Next we will be asked to specify the destination of our backup data. The recommended option is to store the data on a separate disk that is attached to the server; however, we can also change this to back up to another volume or a network share. In this example I'll be using a network share as I have one available. Note the limitation here: we will only be able to store one backup point at a time, as new backups overwrite the existing one. This limitation does not exist when you back up to another disk or volume.

We will now specify the network location, here I pick a file share that is available on the local network and click Next to continue. If you selected a disk or volume destination rather than network, you would instead be asked to pick that disk or volume here.

 


Finally we are presented with a confirmation screen which will summarize our options, click the Finish button to complete the process and accept this, or otherwise go back and make any changes as needed. The summary notes that as we are performing a full backup, we will be able to perform bare metal recovery of the entire system which is fairly powerful.

 







That's it, the backup should automatically start at the time specified. We can manually initiate it by going back to the Windows Server Backup window and selecting "Backup Once". From here we are given the option to create a one-off backup based on our scheduled backup, so all of the same settings will be used, but the backup runs now rather than at the scheduled time.

Are Windows Server Backups Encrypted?

This is a common question that I’ve seen asked a few times, so I thought I’d take the opportunity to answer it here. No, by default backups in Windows Server 2016 (and previous versions for that matter) are not encrypted. We can confirm this by simply browsing to the location that we have specified to backup the data to and look through it. Primarily a .vhdx file is created for the C:\ drive of the server, which we can easily mount through disk manager, assign a drive letter to and then browse through the files and folders.

To encrypt the backup files we could setup Bitlocker on the disks where the backups are being stored, however note that this only protects the data at rest. If the data can be accessed while the drive is available for the backup to work, it could also be read by any potential attacker during this time.

Summary

We have shown you how to install the Windows Server Backup feature in Windows Server 2016 using PowerShell, and then configure a basic backup schedule to a network share.

We also then confirmed that the backup files are not encrypted, so additional steps should be taken to protect them.


Saturday, 25 November 2023

Difference between Disk Cleanup and Disk Defragmenter



Disk Cleanup helps users remove unnecessary files on the computer that may be taking up space on the hard drive. Disk Defragmenter is a utility offered in Microsoft Windows that rearranges files on a disk to occupy continuous storage space.

The longer a computer is in use, the slower it gets. This is because the computer saves files in a manner different from how we think of saving them. Files are saved in fragmented form across a number of places, so when a person opens a file, the hard drive has to search a variety of locations to find the actual file. Even when programs are removed to free up space for other programs, they do not clean up very well after themselves, leaving bits and pieces of files in various places. As time goes by and leftover files start clogging the system, the hard drive and RAM slow down. To cope with such problems, Microsoft offers two programs: Disk Cleanup and Disk Defragmenter.







Disk Cleanup helps users remove unnecessary files on the computer that may be taking up space on the hard drive. Microsoft Windows started offering this computer maintenance utility for removing such files and helping the computer run faster. Disk Cleanup analyzes the hard drive for unnecessary files that are no longer being used and then removes them. During its analysis, Disk Cleanup targets different categories such as compression of old files, temporary internet files, temporary Windows files, downloaded program files, the Recycle Bin, removal of unused applications or optional Windows components, setup log files, and offline files. Aside from removing files, Disk Cleanup can also offer the option of compressing old files that have not been opened for a period of time. The compression frees up more space; however, if the user wishes to access a compressed file, the access time will be increased, depending on the system. In addition to the given categories, there is a More Options tab that offers additional options for freeing up hard drive space by removing Windows components, installed programs, etc.

Disk Cleanup can be accessed by clicking the Start button, typing Disk Cleanup in the search box, and then picking the option from the list of results. It will then provide a list of hard drives, from which the user can pick the drive to clean. In the Disk Cleanup dialog box, on the Disk Cleanup tab, the user can check the boxes of the file categories to clean up and then start the cleanup process by clicking OK.



Disk Defragmenter is a utility offered in Microsoft Windows that rearranges files on a disk to occupy continuous storage space. Windows website defines ‘disk defragmenter’ as “the process of consolidating fragmented files on your computer's hard disk.” Fragmentation happens on a computer disk over time as the user saves, changes and deletes files from the system. The changes and alternations to a file are often saved to a different location on the drive. When opening files that are fragmented over a number of locations, the system slows down as the computer has to look into many places to recover the files. The defragmenter helps the system arrange files in a continuous order to save up more room and make it easier for the computer to look for files. It also helps reduce the start up time of the computer.

The IBM PC DOS operating system shipped with the IBM Personal Computer in 1982 included a Disk Volume Organization Optimizer for defragmenting the 5¼-inch floppy disks used by the machines. At that time Microsoft's MS-DOS did not defragment hard disks. The need for a defragmenter led a number of third-party companies to start offering their own products. While MS-DOS 6.0 introduced a Defrag capability, Windows NT did not offer any similar utility, prompting its customers to use Symantec's defragmenter. Proper disk defragmenter utilities were offered as part of Windows 95, Windows 98, and Windows ME. The latest versions of Windows often have a scheduled defragmenter that runs automatically, so that the user does not have to defragment the system manually.

Both of these maintenance utilities are effective tools for reducing system lag and freeing up space so that the system can run more smoothly. However, remember that they must be run periodically to keep the computer from lagging, as their effects are temporary and files keep building up in the system.

Friday, 24 November 2023

Copying a backup from source to target server

Ex: Copying a production backup from the production server to the quality server using the scp command

Log in to the production server via PuTTY:

go to /hana/shared/backups/ - where the backup file is located

scp <file_name> <user_name_in the_target_server>@<ip address or hostname of the target server>:/hana/shared/backups


Hit enter
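The scp invocation above can be composed from its parts. The helper below is a hypothetical sketch (the function name and the example user/host values are assumptions for illustration):

```python
def build_scp_command(file_name, user, target_host,
                      target_dir="/hana/shared/backups"):
    """Build the scp command shown above from its components."""
    return f"scp {file_name} {user}@{target_host}:{target_dir}"

# Hypothetical user and host names, for illustration only.
print(build_scp_command("COMPLETE_DATA_BACKUP", "qasadm", "dc1-saphanaq01"))
# scp COMPLETE_DATA_BACKUP qasadm@dc1-saphanaq01:/hana/shared/backups
```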

Friday, 17 November 2023

TRUNCATE TABLE A;


Syntax for creating a new column store table

Ex1:



 Create column Table Test_ColumnTB1 (

   Cust_ID INTEGER,
   Cust_NAME VARCHAR(10),
   PRIMARY KEY (Cust_ID)
);


ex 2

CREATE COLUMN Table XJRM63.Testfordatamasking ( "EMP_ID" varchar(10), "Emp_name" varchar (20), "Emp_salary" nvarchar (11)) ;

Create ROW TABLE:

 CREATE ROW TABLE A (A INT PRIMARY KEY, B INT);

Monitoring memory Usage of table

Memory in SAP HANA is used for a variety of purposes.

First, the operating system and support files use memory.

Second, SAP HANA has its own code and stack of program files that also consume memory.

Third, memory is consumed by the column and row stores where data is kept.

Finally, SAP HANA needs memory for its working space, where computations occur, temporary results are stored, and shared user memory consumption occurs.

In short, physical memory is the maximum memory available, some of which is used by the operating system and files. The allocated limit is what is "given" to SAP HANA, and the remaining memory is free space. These allocations are important because SAP HANA uses preallocated memory pools, and you can't rely on Linux operating system information for memory management. However, you can get this information in many other ways.

In addition to the memory consumption and memory tracking information in the preceding sections, you can also obtain memory information using SQL statements, for example against the M_SERVICE_MEMORY view. SAP provides a set of predefined SQL statements, including those shown below:


To access these statements, open the SQL console from the context menu of the VIEWS folder in  the SYS schema.


Total Memory Used by All Row Tables:
Select round (sum(USED_FIXED_PART_SIZE +USED_VARIABLE_PART_SIZE)/1024/1024) AS "ROW Tables MB" FROM M_RS_TABLES;


Total Memory used by All Column Tables:

Select round (sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB" FROM M_CS_TABLES;

Total Memory Used by all column Tables in a Schema

Select SCHEMA_NAME AS "Schema", round (sum(MEMORY_SIZE_IN_TOTAL) /1024/1024) AS "MB" FROM M_CS_TABLES GROUP BY SCHEMA_NAME ORDER BY "MB" DESC;


For column tables, SAP HANA sometimes unloads infrequently used columns from memory to free up space when the allocated memory threshold is near the used capacity.So, when memory consumption is estimated, it's important to look at all column tables, not just those currently loaded in memory. You can see all column table sizes in a schema by using the SAP provided SQL statement:


Total Memory by column Tables in a schema:

Select TABLE_NAME AS "Table", round (MEMORY_SIZE_IN_TOTAL/1024/1024, 2)as "MB" FROM M_CS_TABLES WHERE SCHEMA_NAME = 'SYSTEM' ORDER BY "MB" DESC;

Select TABLE_NAME AS "Table", round (MEMORY_SIZE_IN_TOTAL/1024/1024, 2)as "MB" FROM M_CS_TABLES WHERE SCHEMA_NAME = 'XJRM63' ORDER BY "MB" DESC;
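The byte-to-MB conversion used throughout these statements (/1024/1024) can be checked with a quick Python sketch:

```python
def bytes_to_mb(size_in_bytes, digits=2):
    """Convert a byte count to MB, as done with /1024/1024 in the SQL above."""
    return round(size_in_bytes / 1024 / 1024, digits)

print(bytes_to_mb(1073741824))  # 1024.0
print(bytes_to_mb(5417780))     # 5.17
```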



Thursday, 16 November 2023

Exporting and importing table data and definitions


Export of table:

You can export and import table data between systems and hosts.

* For row-based data tables, you can export the data in comma-delimited format (CSV).
* For column-based data tables, you can also choose a binary format.
    Binary formats tend to be smaller and faster, but this option should only be used if you're moving data tables between SAP HANA systems (e.g., from production to a development box).

To export tables, go to the systems list in SAP HANA Studio, navigate to the table(s) you want data from, and select EXPORT from the right-click context menu for the table. This will take you to an export wizard that will guide you through the process of selecting tables and available formats. You can also choose to export only the table definitions (catalog) or include the data as well (catalog and data).

Select where you want to store the file and how many threads you want to use for the export. The more threads you select, the faster the export will execute. However, selecting too many threads may impact system performance for others, so keep the number small (one to five) unless you do this on a large system, during off-hours, or on a system with little usage. The export wizard will save the file where you specified, in the format selected.



Import of table: 


To import table(s) into SAP HANA from a file, simply right-click the catalog in the systems list in SAP HANA Studio and select IMPORT from the context menu.
After entering the path where the file is located, you can select the tables you want; you may have to type in the table names if you're using a remote host. The last import steps tell the system the format of the file (CSV or binary) and the number of threads to be used. The table is then automatically imported into your system and can be seen in the systems list.


Friday, 29 September 2023

SM51 - instances of SAP system - monitoring

1. Access T-code SM51.

Check:
1. Check the Application server instance - state as active.
2. During any maintenance activity, we can make state as - Deactivate.
       Goto--> Administration -->state -->Deactivate.

3.after completing maintenance activity, we can make state as - Activate.

       Goto--> Administration -->state -->Activate.

4. To check the list of users in all clients (100, 200, etc.):
       Goto --> Users.
   Analyze the user load in all clients.

5. This option lets changes made in the OS hosts and services tables take effect in a running SAP system: make the desired changes at the operating system level, and then, instead of restarting, reset the host name buffer for the affected components:

     SM51 --> Goto --> Host Name Buffer --> Reset --> Entire System.




Tuesday, 26 September 2023

Dialog work process status in sm50

 

Waiting: the process is idle ("available") and ready to serve user requests.

Running: the process is executing a task.

Sleep mode: the process is waiting for resources (e.g., RFC problems).

On hold: the user request is held by the process while it waits for resources on other systems (RFC/CPIC).

PRIV mode: the process has switched to heap (private) memory; the work process is released only after task completion or a timeout.

Stopped: the process has stopped due to an error.

Shutdown: the process was killed/shut down, with restart mode set to NO.


Monday, 6 March 2023

List of freelancing work

If anyone wants any of the freelancing services below, contact me on - whatsapp link

VM services:

AWS: create / maintain / modify VMs

Azure: create / maintain / modify VMs




SAP BASIS:

1. User administration in ABAP server / CUA configuration / EULA support.
2. Client administration - 1. Local 2. Export & import 3. Remote.
3. IDES installation.
4. SAP ABAP server installations.
5. System export and import (on premise to cloud).
6. SUM tool upgrades.
7. SUM with DMO.
8. System conversion from ECC to S/4HANA.
9. Kernel upgrades.
10. SAP support portal administration.

SAP HANA:

1.Provide Cloud infra for SAP HANA servers.
2.SAP HANA Installations.
3.SAP HANA upgrades.
4.SAP HANA Mini checks.
5.HADR configuration / monitoring.

Video classes

For all the video classes, please contact me on - whatsapp link

1. Udemy - AWS Certified Solutions Architect - Associate [Latest Exam]: - RS  999/-

     index of course: Index    



1. Course Introduction

1. Introduction

2. Recommended Study Plan for best results & Video Speed Control

3. Getting Organized & Summary PDF Download

3.1 Summary Final AWS CSAA Review Notes v1.0 FEB 2020


2. Introduction to Cloud Computing and AWS Global Infrastructure

1. On Premise Data Center , What is Cloud Computing

2. Cloud Computing Offerings - Public, Private and Hybrid Cloud

3. AWS as the Public Cloud IaaS Leader

4. AWS Global Infrastructure - Regions and Availability Zones

5. Architect Profession - Info you need to know


3. Getting Started with AWS - Free Tier Account Setup & AWS IAM Foundation

1. AWS Identity and Access Management 101

2. Hands on Lab - Creating Users and Account Password Policy

3. Hands on Lab - Protecting the Root user using Multi Factor Authentication

4. AWS Free Tier - Know what you are getting for free with your new account

5. Hands on Lab - AWS Billing Alerts and Cost Budgets


4. AWS Services Foundation - Introduction to EC2, EBS, IAM, S3

1. Elastic Compute Cloud (EC2) Foundation

2. Hands On Lab - Create your first EC2 instance


5. Core Knowledge - VPC, Sec Group, N ACL, Elastic IP, NAT, VPN, VPC Peering & D. Connect

1. Core Knowledge - VPC introduction

2. Core Knowledge - VPC Components - Implied Router and Route Tables

3. Core Knowledge -VPC Components - IP Addressing - Internet Gateway - Subnet Types

4. DEMO - VPC Overview Lab # 1

5. DEMO - VPC Overview Lab # 2

6. Core Knowledge - VPC Components - VPC Types and Introduction to Security Groups

7. DEMO VPC Lab # 3 - Creating A Custom VPC

8. DEMO - VPC Lab # 4 - Security Groups

9. Core Knowledge - VPC Components - Security Groups Mastery

10. Core Knowledge - VPC Components - Network Access Control Lists (N ACLs)

11. DEMO VPC Lab # 5 - Network Access Control Lists (NACLs)

12. Core Knowledge -  VPC - Network ACLs and Security Groups

13. Core Knowledge - VPC - Network ACLs vs. Security Groups

14. Core Knowledge - VPC Security Scenarios - Applying Security Group and N ACLs

15. Core Knowledge - VPC - Network Address Translation - NAT

16. Core Knowledge - VPC Peering

17. DEMO - VPC Lab # 7 - Working with VPC Peering Across Accounts/Regions

18. Core Knowledge - Transit Gateway

19. Core Knowledge - VPC Virtual Private Networks (VPN)

20. Core Knowledge - VPC Direct Connect

21. Core Knowledge - Direct Connect Routing and Link Aggregation Groups (LAGs)

22. Hybrid Connectivity Use Cases   Scenarios

23. Core Knowledge - AWS Direct Connect Gateway

24. Core Knowledge - AWS Direct Connect Limits

25. AWS VPC - VPC Endpoints - Gateway Endpoint

26. AWS VPC - Hands On Lab - VPC Gateway Endpoint

27. AWS VPC - VPC Interface Endpoint

28. Hands On Labs - VPC Interface Endpoints

29. Egress Only Internet Gateway (for IPv6)

30. Hands On Labs - Egress-only Internet Gateway

31. Core Knowledge - VPC Flow log and DHCP Option Sets

32. Hands On Labs - VPC Flowlogs

33. VPC Quiz 1

34. VPC Quiz 2

35. VPC Quiz 3

36. VPC Quiz 4

37. VPC Quiz 5

38. VPC Quiz  6


6. Core Knowledge - Master VPC (& its Components) Scenario-based Practice Questions

1. Core Knowledge - AWS VPC Set of Questions #1

2. Core Knowledge - AWS VPC Set of Questions #2

3. Core Knowledge - AWS VPC Set of Questions # 3

4. Core Knowledge - AWS VPC Set of Questions #4

5. Core Knowledge - AWS VPC Set of Questions #5

6. Core Knowledge - AWS VPC Set of Questions # 6

7. Core Knowledge - AWS VPC Set of Questions #8

8. Core Knowledge - AWS VPC Set of Questions #9

9. Core Knowledge - AWS VPC Set of Questions #10

10. Core Knowledge - AWS VPC Set of Questions # 11

11. Core Knowledge - AWS VPC Set of Questions #12

12. Core Knowledge - AWS VPC Set of Questions #13

13. Core Knowledge - AWS VPC Set of Questions #15


7. Core Knowledge - Master Elastic Compute Cloud (EC2) Exam Required Knowledge


1. Core Knowledge - Elastic Block Store Types

2. DEMO - EC2 LAB - Creating an EC2 instance - Part 1

3. DEMO - EC2 LAB - Creating an EC2 instance - Part 2

4. DEMO - EC2 LAB - Creating an EC2 instance - Part 3

5. Hands On Labs - EC2 LAB - Encrypting the Root Volume of an EC2 instance

6. Hands On Labs - Linux/Mac users - SSH to an EC2 instance

7. Hands On Labs - Windows PC - SSH to A linux EC2 instance Using Putty

8. Hands On Labs -  EC2 Instance States

9. Hands On Labs - Instance Store-backed EC2 instance launch

10. EC2 Enhanced Networking and Placement Groups

11. EC2 Placement Groups

12. EC2 Status Checks and Monitoring

13. EC2 Instance States

14. EC2 Instance Termination and Termination Protection

15. EC2 Instance Metadata and User Data

16. Hands On Labs - EC2 Instance User Data

17. Hands on Labs - EC2 instance metadata

18. Migration to/from AWS EC2 & VM Import/Export

19. Bastion Hosts

20. EC2 Instance Launch Modes (Purchase Options) - Reserved & Scheduled Instances

21. EC2 Instance Launch Modes (Purchase Options) Part 2

22. VPC and EC2 Instance Tenancy Attribute

23. Elastic Compute Cloud - Elastic Network Interfaces (ENIs)

24. Elastic Network Interface (ENI) - IP Addressing

25. NAT instance Source/Destination Check

26. Public IPv4 address auto assignment

27. DEMO - TCP/IP Packet Walkthrough - Deep Dive

28. DEMO LAB - EC2 - VPC Combined Project Lab - Part 1

29. DEMO LAB - EC2 - VPC Combined Project Lab - Part 2

30. DEMO LAB - EC2 - VPC Combined Project Lab - Part 3

31. DEMO LAB - EC2 - VPC Combined Project Lab - Part 4

32. DEMO LAB - EC2 - VPC Combined Project Lab - Part 5

33. DEMO - EC2 - NAT Instance Project - Part # 1

34. DEMO - EC2 Labs - NAT Instance Project - Part # 2

35. DEMO - EC2 Labs - NAT Instance Project & NAT Gateway - Part # 3

36. Troubleshooting EC2

37. EC2 Quiz 1 - 10 Questions

38. EC2 Quiz 2 - 10 Questions

39. EC2 Quiz 3 - 12 Questions


8. Core Knowledge - Master AWS EC2  - Exam Scenario-based Questions

1. Core Knowledge - EC2 Exam Scenario-based Set of Questions #1

2. Core Knowledge - EC2 Exam Scenario-based Set of Questions #2

3. Core Knowledge - EC2 Exam Scenario-based Set of Questions # 3

4. Core Knowledge - EC2 Exam Scenario-based Set of Questions #4

5. Core Knowledge - EC2 Exam Scenario-based Set of Questions #5

6. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #6

7. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #7

8. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #8

9. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #9

10. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #10

11. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #11

12. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #12

13. Core Knowledge - AWS EC2 Exam Scenario-based Set of Questions #13


9. Introduction to Encryption and AWS KMS

1. Introduction to Encryption and Cloud HSM

2. AWS Key Management Service (KMS) - Introduction

3. AWS KMS - Customer Master Keys (CMK)


10. Core Knowledge - Master AWS's Elastic Block Store (EBS) Exam Required Knowledge

1. EBS - Block Store Types and IOPS Performance

2. EBS Snapshots

3. EBS Snapshots - 2

4. Hands On Lab - EBS Volumes and Snapshots

5. EBS Encryption 1

6. EBS Encryption 2 - Changing the Encryption state of an EBS volume

7. Hands On Lab - Root EBS Volume Encryption

8. EBS - Sharing EBS Snapshots

9. EBS - Copying EBS snapshots

10. Hands On Lab - EBS Encryption and Sharing Snapshots

11. EBS - Creating and Registering AMIs from Block Store Volumes

12. EBS -  Creating AMIs from EBS-Backed EC2 Instances

13. Hands On Labs - EBS - Creating and Sharing AMIs

14. Hands On Labs - EBS - Creating and Sharing AMIs - ENcryption Key permissions

15. EBS Redundant Array of Independent Disks (RAID) and EBS volumes

16. EBS Quiz 1 - 12 Questions

17. EBS Quiz 2 - 12 Questions

18. EBS Quiz 3 - 10 Questions


11. Core Knowledge - Master AWS EBS - Exam Scenario-based Question

1. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #1

2. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #2

3. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #3

4. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #4

5. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #5

6. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #6

7. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions # 7

8. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #8

9. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #9

10. Core Knowledge - AWS EBS Exam Scenario-based Set of Questions #10



12. Core Knowledge - Master Elastic Load Balancer (ELB) Exam Required Knowledge

1. Core Knowledge - Elastic Load Balancer - Introduction

2. Core Knowledge - Elastic Load Balancer - How it works

3. Core Knowledge - How ELB works - Continued

4. Core Knowledge - How ELB works - Again !

5. Core Knowledge - ELB health checks

6. Core Knowledge - ELB Cross Zone Load Balancing

7. Core Knowledge - ELB Positioning - Internet-facing vs Internal ELB

8. Core Knowledge - Refresher for TCP/IP Packet flow

9. Core Knowledge - ELB  - Security Group

10. Core Knowledge - ELB - Network ACLs

11. Core Knowledge - ELB - Layer 4 TCP/SSL Listeners

12. Core Knowledge - ELB - Layer 7 HTTP/HTTPS Listeners

13. DEMO - ELB Service - Classical Load Balancer Overview and Lab Layout - Part 1

14. DEMO - ELB Service - Classical Load Balancer Overview and Lab Layout - Part 2

15. DEMO - ELB Service - Classical Load Balancer Overview and Lab Layout - Part 3

16. Core Knowledge - ELB & Sticky Sessions (Session Affinity)

17. Core Knowledge - ELB Security policy for SSL/HTTPS sessions

18. SSL/HTTPS authentication - Client and Server Certificates

19. ELB Connection Draining

20. ELB Monitoring

21. ELB Pre-Warming & Scaling

22. Testing your ELB scaling (of application servers behind the ELB)

23. ELB Quiz 1 - 10 Questions

24. ELB Quiz 2 - 10 Questions

25. ELB Quiz 3 - 11 Questions


13. Core Knowledge - Master AWS ELB - Exam Scenario-based Question

1. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #1

2. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #2

3. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #3

4. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 4

5. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 5

6. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 6

7. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 7

8. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 8

9. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #9

10. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #10

11. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 11

12. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #12

13. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #13

14. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #14

15. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions # 15

16. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #16

17. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #17

18. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #18

19. Core Knowledge - AWS ELB Exam Scenario-based Set of Questions #19


14. AWS Auto Scaling - Master Auto Scaling Exam Required Knowledge

1. Auto Scaling Introduction - The problem statement

2. Auto Scaling Components

3. Auto Scaling Features

4. Auto Scaling Availability Zone Rebalance feature

5. Adding/Detaching EC2 instances to/from Auto Scaling Groups

6. Auto Scaling and Elastic Load Balancing Service

7. Auto Scaling health checks

8. Auto Scaling Health Checks - SNS Notifications & Merging Auto Scaling groups

9. Hands On Lab (HOL) - Auto Scaling - Part 1

10. Hands On Lab (HOL) - Auto Scaling - Part 2

11. Auto Scaling policies/plans - Part 1 - Scheduled Scaling

12. Auto Scaling policies (plans) -Part2 - On-Demand Simple Scaling

13. Auto Scaling policies (plans) - Part 3 - On-Demand Step Scaling

14. Auto Scaling policies (plans) - Part4 - Target Tracking Scaling

15. Auto Scaling - Monitoring

16. Auto Scaling Quiz 1 - 11 Questions

17. Auto Scaling Quiz 2



15. Core Knowledge - Master Auto Scaling Scenario-based Practice Questions

1. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions #1

2. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions # 2

3. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions #3

4. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions #4

5. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions #5

6. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions #6

7. Core Knowledge - AWS Auto Scaling Exam Scenario-based Set of Questions # 7


16. Introduction to Databases

1. Introduction to Relational Databases

2. Introduction to Non-Relational Databases (No-SQL)

3. Core Knowledge - RDS - Types and Examples of Non-Relational Databases


17. Relational Database Service (RDS)

1. Core Knowledge - RDS - Introduction lecture to the AWS RDS Service

2. Core Knowledge - RDS - Multi-AZ option

3. Core Knowledge - RDS - Subnet Groups

4. Hands On Lab - Creating an RDS instance

5. Core Knowledge - RDS - Automatic Backups

6. Core Knowledge - RDS - Manual Backups (Snapshots)

7. Core Knowledge - RDS - Multi-AZ part II

8. Core Knowledge -  RDS instance security and encryption

9. Core Knowledge - RDS - Read Replicas of RDS instances

10. Hands On Labs - RDS Instance Actions, Snapshots and Read Replicas

11. Core Knowledge - RDS - Billing and Reserved DB instances

12. Core Knowledge - RDS - Scaling

13. RDS Quiz 1 - 12 Questions

14. RDS Quiz 2 - 12 Questions

15. RDS Quiz 3 - 11 Questions



18. Amazon Aurora

1. Amazon Aurora - Introduction and Aurora Cluster Architecture

2. Amazon Aurora - End Points

3. Amazon Aurora Features - Autoscaling, Storage and Reliability, High Availability

4. Amazon Aurora - Security, Encryption, Global  DB, Aurora with other Services

5. Hands On Labs - Create an Amazon Aurora DB Cluster

6. Amazon Aurora Replication, Automated backup and Snapshots, Backtrack feature

7. Amazon Aurora Serverless



19. Core Knowledge - Master AWS RDS Scenario-based Practice Questions

1. RDS Exam Scenario-based Set of Questions #1

2. RDS Exam Scenario-based Set of Questions # 2

3. RDS Exam Scenario-based Set of Questions #3

4. RDS Exam Scenario-based Set of Questions #4

5. RDS Exam Scenario-based Set of Questions #5

6. RDS Exam Scenario-based Set of Questions #6

7. Core Knowledge - AWS RDS Exam Scenario-based Set of Questions #7


20. AWS Auditing, Monitoring, and Notification Services

1. Core Knowledge - AWS Simple Notification Service (SNS)

2. AWS Simple Notification Service - Reliability & Security

3. Hands On Labs - Simple Notification Service (SNS)

4. Introduction to AWS CloudTrail

5. CloudTrail Log File Integrity Validation

6. Amazon CloudWatch - Introduction

7. CloudWatch - How it works, Cloud Watch Concepts

8. Amazon CloudWatch Alarms

9. Hands On Labs - CloudWatch Alarms

10. CloudWatch Logs - Introduction and Concepts

11. CloudWatch Logs Insights, CW and EC2, CloudTrail, S3 and ElasticSearch

12. Hands On Labs - CloudWatch Logs with VPC FlowLogs and CloudWatch Alarms

13. AWS CloudWatch Logs - CW Agent, Real Time Processing, and Cross Account Logging

14. CloudWatch Events

15. Hands On Labs - CloudWatch Events


21. Core Knowledge - Master Simple Storage Service (S3) Exam Required Knowledge

1. Amazon  S3 - Introduction to Object Storage

2. Amazon  S3 - Data Consistency models in distributed storage systems

3. Amazon S3 - AWS S3 Buckets - 1

4. Amazon  S3 - Objects

5. Amazon  S3 -  Buckets - 2

6. Hands On Labs - Creating a S3 bucket and uploading Objects

7. Amazon S3 - Bucket Versioning and MFA Delete

8. Hands On Labs - S3 Bucket Versioning

9. S3 - Managing Access and Access Policies

10. Amazon S3 Access Policy types

11. Amazon  S3 - Understanding Bucket and Object ACLs - Closer Look

12. Amazon S3 - Copying / Uploading S3 Objects

13. S3 Storage Classes - Part 1

14. S3 Storage Classes - Part 2

15. Amazon  Glacier - Archive Retrieval in Glacier

16. Amazon  S3 Bucket LifeCycle Policies

17. Hands On Labs - S3 Lifecycle Policies/Rules

18. Amazon  S3 - Server Side Encryption (SSE)

19. Amazon  S3 Server Side Encryption - Detailed

20. Hands On Labs - S3 Server Side Encryption

21. Amazon  S3 - When to use Access Control Lists with Buckets & Object

22. Amazon  S3 - When to use Bucket and User Policies

23. Amazon S3 - Static Website Hosting in an S3 Bucket

24. Hands On Labs - S3 Static Website Hosting

25. Amazon S3 - Pre-Signed URLs

26. Amazon S3 - Cross Region Replication (CRR) 1

27. Amazon S3 - Cross Region Replication 2

28. S3 Same Region Replication - SRR

29. Hands On Labs - S3 - SRR and CRR

30. Amazon S3 - Cross Origin Resource Sharing (CORS)

31. Core Knowledge - S3 - Transfer Acceleration

32. Amazon S3 Performance considerations and best practices

33. Amazon S3 and Glacier SELECT

34. Core Knowledge - S3 - Billing

35. Core Knowledge - S3 - Notification and S3 Monitoring

36. Hands On Labs - Installing AWS CLI on Windows, MacOS, and Linux

37. S3 Quiz 1 - 9 Questions

38. S3 Quiz 2 - 9 Questions


22. Core Knowledge - Master AWS S3 Scenario-based Practice Questions

1. Core Knowledge - AWS S3/Glacier Exam Scenario-based Set of Questions #1

2. Core Knowledge - AWS S3/Glacier Exam Scenario-based Set of Questions #2

3. Core Knowledge - AWS S3/Glacier Exam Scenario-based Set of Questions #3

4. Core Knowledge - AWS S3/Glacier Exam Scenario-based Set of Questions #4


23. File System options - EFS and FSx

1. AWS Elastic File System (EFS) - Introduction to the Service and Mount Targets

2. AWS EFS - Use Cases , Use in On Premise Servers , Storage Classes, Pricing

3. Hands On Labs - EFS

4. AWS EFS - Data Encryption, EFS Data backup, EFS and AWS Datasync, Monitoring EFS

5. AWS FSx for Windows File Server - Introduction, Deployment options & Encryption

6. Amazon FSx - Data Protection, Backup/Restore, Access, Monitoring and Security

7. Hands On Labs - Amazon FSx for Windows File Server

8. Amazon FSx for Lustre

9. Amazon EFS vs FSx for Windows vs FSx for Lustre

10. Hands On Labs - Amazon FSx for Lustre


24. Amazon Route 53 - AWS's Domain Name System (DNS) Service

1. Amazon Route 53 - Introduction to DNS System and DNS Hierarchy

2. Amazon Route 53 - Registering Domains with Route 53

3. Amazon Route 53 - Steps to configure ROUTE 53

4. Hands On Labs - Registering a Domain Name with Route 53

5. Amazon Route 53 -  Hosted Zones

6. Amazon Route 53 -  Working with Hosted Zones

7. Hands On Labs - Creating/testing the Lab setup and Route53 Simple Routing

8. Amazon Route 53 -  Supported DNS Record Types

9. Amazon Route 53 - Alias or No Alias - CNAME vs Alias records

10. Hands On Labs - Route 53 Alias Records

11. Amazon Route 53 - Health Checks

12. Amazon Route 53 - Routing Policies - Failover & Geolocation Routing

13. Hands On Labs - Route 53 Health Checks and Failover Routing

14. Hands On Labs -  Route 53 Geolocation (Geo) Routing

15. Amazon Route 53 - Latency and Weighted Routing Policies

16. Hands On Labs - Route 53 Latency Routing

17. Hands On Labs - Route 53 Weighted Routing

18. Hands On Lab - Amazon Route 53 MultiValue Answer

19. Amazon Route 53 Resolver

20. Hands on Labs - Route 53 Resolver

21. Amazon Route 53 - Pricing

22. AWS ROUTE 53 - Quiz 1 - 8 Questions

25. Core Knowledge - AWS CloudFront

1. Content Delivery Networks (CDNs) introduction

2. Amazon CloudFront - Static and Dynamic Content

3. Amazon CloudFront - Introduction

4. Amazon CloudFront Regional Edge Cache

5. Amazon CloudFront Distributions

6. Amazon CloudFront - Origin types

7. Amazon  CloudFront - Content Delivery

8. Amazon  CloudFront - Alternate Domain Names

9. Amazon CloudFront - Supported HTTP Methods & Serving Private Content

10. Amazon CloudFront - Viewer & Origin Protocol & Object Invalidation

11. Hands On Labs - Configuring a Cloudfront Web Distribution with a S3 AWS  Origin

12. Amazon  CloudFront - Field Level Encryption & WAF & GeoRestriction

13. Amazon CloudFront - Video Streaming, Access Logs & CloudTrail, Pricing

14. Global Accelerator

15. Hands-On Lab - Global Accelerator

16. AWS Cloudfront Quiz # 1 - 4 Questions


26. Messaging and Integration

1. AWS Simple Queue Service (SQS) - Introduction

2. AWS SQS - Polling types and SQS Timers

3. AWS SQS - Reliability, Security, and Encryption

4. Hands On Labs - Simple Queue Service and Integration with SNS

5. AWS SQS - Monitoring, SQS queue names, and Logging

6. Amazon MQ

7. AWS SQS Quiz 1 - 7 Questions

8. AWS SQS Scenario Based Practice Questions Set #1

9. AWS SQS Scenario Based Practice Questions Set #2


27. Amazon Serverless Services

1. Introduction to AWS Lambda

2. AWS Serverless (lambda-based) Applications building blocks

3. AWS Lambda Function Invocation types

4. AWS Lambda Triggers - Event Sources that can Trigger a Lambda Function

5. Hands On Labs - Lambda and S3 as a Trigger

6. AWS Lambda Use Cases

7. AWS Lambda Scaling, Versioning, and Service Limits

8. AWS Lambda - Operations and Monitoring

9. AWS Lambda@Edge

10. AWS Lambda Quiz 1 - 5 Questions

11. AWS Lambda Scenario Based Questions Set # 1

12. Introduction to AWS API Gateway

13. API Gateway Architecture , API Methods and Resources

14. API Gateway - Scaling - Throttling and Caching

15. API Gateway CORS & API Operations and Monitoring

16. Hands On Labs - API Gateway

17. AWS API Gateway Quiz 1 - 7 Questions

18. AWS API Gateway Scenario Based Practice Questions Set # 1

19. AWS API Gateway Scenario Based Practice Questions Set # 2

20. AWS DynamoDB - Review of NoSQL and Data Types

21. DynamoDB Introduction

22. DynamoDB tables, components, Primary Key

23. DynamoDB Table Throughput

24. Hands On Labs - Creating a DynamoDB Simple Key table and Global Tables

25. DynamoDB - Local and Global Secondary Indexes

26. Hands on Labs - DynamoDB Local and Global Secondary Indexes

26.1 DynamoDB Global/Local Secondary Indexes 1

26.2 DynamoDB Global/Local Secondary Indexes 2

27. DynamoDB Local, Backup/Restore, Point in Time Recovery, and TTL

28. Hands On Labs - DynamoDB Point in Time Recovery, On Demand Backup and TTL

29. DynamoDB Accelerator (DAX)

30. DynamoDB Streams

31. DynamoDB Transactions

32. Amazon Neptune and DocumentDB (With MongoDB compatibility)

33. 7-5) DynamoDB Scalability, Throttling, and Limits

34. DynamoDB Quiz 1  - 11 Questions

35. DynamoDB Scenario Based Practice Questions Set # 1

36. DynamoDB  Scenarios Based Practice Questions Set # 2



28. AWS Caching, Big Data, Data Streaming, Analytics,  and IoT Services

1. Amazon Elastic Map Reduce - Introduction

2. AWS EMR - Clusters, Nodes, and deployment in an AZ

3. Amazon Elasticache Introduction

4. Amazon ElastiCache - Caching Strategies

5. Amazon  Elasticache for Memcached

6. Amazon  Elasticache for Redis

7. Amazon  ElastiCache Quiz # 1 - 7 Questions

8. Amazon Elasticache Scenario Based Questions set #1

9. Amazon Elasticache Scenario Based Questions set #2

10. Amazon Elasticache Scenario Based Questions set # 3

11. 5-1) AWS Kinesis - Introduction

12. 5-2) AWS Kinesis Data Streams

13. Hands On Labs - Amazon Kinesis Data Streams

13.1 Kinesis lab

14. 5-3) AWS Kinesis Data Firehose

14.1 Kinesis lab

15. Hands On Labs - Kinesis Firehose

16. 5-4) AWS Kinesis Analytics

17. AWS Kinesis Quiz 1

18. 5-5) AWS Kinesis Scenario Based Practice Questions Set # 1

19. 5-6) AWS Kinesis Scenario Based Practice Questions Set # 2

20. 5-7) AWS Kinesis Scenario Based Practice Questions Set # 3

21. 5-8) AWS Kinesis Scenario Based Practice Questions Set # 4

22. 4-1) AWS Redshift - Introduction

23. 4-2) AWS Redshift Backup/Restore & Monitoring

24. 4-3) AWS Redshift - High Availability, Data Durability, Scaling and Billing

25. 7-6) DynamoDB integration with RedShift and AWS EMR plus Best Practices

26. AWS Redshift Quiz 1 - 3 Questions

27. 4-4) AWS Redshift Scenario Based Practice Questions Set # 1


29. AWS Services and Strategies for Deployment Management

1. Amazon CloudFormation

2. Amazon Cloudformation Template Components

3. Hands On Labs - Creating a CF Stack, Updating a Stack, and Stack Change Sets

4. AWS OpsWorks - Introduction

5. AWS Opsworks Stacks and Layers

6. AWS OpsWorks Components

7. AWS Elastic BeanStalk - Introduction

8. Elastic BeanStalk - Components and Concepts


30. Amazon Elastic Container Services (ECS)

1. 8-1) AWS EC2 (Elastic) Container Service [ECS] - Why we need it

2. 8-2) AWS ECS - Introduction to Docker

3. 8-3) Introduction to AWS ECS

4. 8-4) AWS ECS Launch Types

5. 8-5) AWS ECS - Task Definitions and Tasks

6. 8-6) AWS ECS and IAM Roles

7. Elastic Kubernetes Service (EKS)

8. AWS ECS Quiz 1 - 5 Questions

9. AWS ECS Scenario Based Practice Questions Set # 1

10. AWS ECS Scenario Based Practice Questions Set # 2


31. Core Knowledge - Application Load Balancer

1. Classic Load Balancer Refresher and Weaknesses

2. Introduction to Application Load Balancer

3. ALB Components explained

4. Hands on Labs - Creating Route Targets and Application Load Balancer

5. ALB Listener Rules

6. ALB Content Routing ( Host and Path based routing)

7. ALB - Containers and Microservices Support

8. ALB and ECS Dynamic Host Port Mapping

9. Hands-On Lab - ECS and Application Load Balancer

10. Monitoring the ALB


32. Amazon Network Load Balancer (NLB)

1. Amazon ELB -  Network Load Balancing (NLB) - Agenda

2. Quick Amazon ELB Recap

3. Amazon NLB - L2.5 Features and How it Works

4. Amazon NLB - Features and How it Works (Cont.)

5. Amazon NLB - Supported Target Types

6. Amazon NLB - Source IP Address Preservation

7. Amazon NLB - Proxy Protocol - Health Checks - Monitoring

8. Amazon NLB - Troubleshooting Some Common Problems


33. AWS Identity and Access Management ( IAM ) and AWS Directory Services

1. AWS Directory Services - Introduction

2. AWS Services - AWS Microsoft Active Directory - 1

3. AWS Directory Service - AWS Microsoft AD - 2

4. AWS Directory Service - AWS Simple AD

5. AWS Services - AWS Directory Services - AWS Connector

6. AWS Directory Service  Quiz # 1 - 5 Questions

7. AWS Directory Service - Scenario Based Questions Set # 1

8. AWS Directory Service - Scenario Based Questions Set # 2

9. AWS Identity and Access Management (IAM) - Introduction

10. IAM - Features

11. IAM Elements

12. IAM Elements (Actions and Resources)

13. IAM & Identity Federation

14. IAM - Identities - Groups , Roles and Temporary Credentials

15. IAM Identity-based and Resource-based policies

16. IAM Users deep dive

17. IAM User Credentials detailed

18. IAM Roles

19. IAM Service Roles

20. IAM Role Delegation

21. AWS Cross-Account Access

22. Cross Account Access with External ID

23. IAM Users and Roles - When to use what

24. IAM Logging using AWS CloudTrail

25. IAM Best Practices

26. AWS IAM Quiz 1 - 12 Questions

27. AWS IAM Scenario Based Practice Questions Set # 1

28. AWS IAM Scenario Based Practice Questions Set # 2

29. Security Token Service (STS) - Introduction

30. WebID Federation and STS Credentials

31. Using STS Security Credentials - WebID Federation example

32. Web Identity Federation using STS

33. Identity Federation with SAML 2.0 - AWS API Access

34. Identity Federation with SAML 2.0 (Console Access)

35. AWS Single Sign-On (SSO)

36. AWS Web Application Firewall (WAF)

37. Hands-On Lab - AWS WAF

38. AWS Secure Token Service (STS) Quiz 1 - 5 Questions

39. AWS STS/ID Federation Scenario Based Practice Questions Set # 1


34. Amazon Data Migration Services and Hybrid Cloud

1. AWS Snowball

2. AWS Snowball, SnowBall Edge, Snowmobile

3. Hands On Labs - AWS Snowball Console Walkthrough

4. AWS Storage Gateway - Snowball - VM Import/Export

5. Hands On Labs - AWS Storage Gateway console Walkthrough

6. AWS DataBase Migration Service (AWS DMS)

7. AWS Database Migration Services - How it works and Schema Conversion Tool (SCT)

8. AWS DMS Components - Replication Instance, Multi AZ, DMS and VPC

9. AWS Server Migration Service (SMS)


35. Misc Services

1. AWS Organizations

2. AWS Organizations - Components

3. AWS Organizations - Features

4. Hands-On Lab - AWS Organizations and Service Control Policies

5. AWS Glue - What is it, how it works, and its components

6. AWS Systems Manager

7. AWS Parameter Store

8. AWS Secrets Manager

9. Hands-On Lab - Secrets Manager and Parameter Store

10. Amazon Athena

11. Hands On Labs - Amazon Athena

11.1 Athena DDL file

12. AWS Service Catalog

13. AWS Config

14. AWS Batch

15. Amazon X-Ray

16. AWS Resource Access Manager (AWS RAM)

17. Amazon SWF - Introduction , Benefits and Concepts

18. AWS SWF - Concepts , Task Types and Endpoints

19. AWS Step Functions

20. Amazon GuardDuty

21. Amazon Inspector

22. Amazon Macie

23. Amazon WorkSpaces

24. Amazon WorkDocs

25. AppSync

26. Elastic Transcoder

27. Elasticsearch


36. Wrap Up

1. Exam Blueprint and AWS website tour (certification and documentation knowledge)

2. Requesting 30 Minutes additional Exam time for Non Native English Speakers