Monday, October 7, 2024

Title Changed -- reflects my journey

 

The title "Evolving Architect: Combining Data, Design, and Project Management" captures my journey as I grow from data-centric expertise to a broader role that encompasses infrastructure design and project management. It reflects a dynamic progression and emphasizes a well-rounded skill set, likely resonating with all of you readers interested in comprehensive IT and infrastructure insights. This title is concise yet reflective of my expanded focus across multiple facets of IT architecture.

Wednesday, August 14, 2024

My Technology Journey

 It’s fascinating how my passion for databases has shaped my journey. From the early days of exploring data structures to understanding complex database systems, my love for this field has only deepened. I continue to marvel at the rapid technological advances not only in databases but across various domains. Each innovation inspires me, pushing me to stay curious and engaged with emerging trends that can transform the way we manage and utilize data.


As I reflect on my personal and professional growth, I realize that my skills have significantly expanded. I’ve transitioned from focusing solely on databases to embracing broader concepts like infrastructure architecture and design. This evolution has opened up new horizons, allowing me to approach challenges with a more holistic mindset. I’m excited to dive deeper into these areas, exploring how they intersect with my foundational knowledge of databases.


With this expanded skill set in mind, I believe it’s time to update my blog’s heading to better reflect my current interests and expertise. As I tackle infrastructure architecting and project management, I want my content to resonate with those who share a similar passion for technology and innovation. While I’m considering various headings, I’m open to suggestions and would love to hear any ideas that could help capture this exciting new chapter of my journey.



Saturday, February 3, 2024

SQL Server on AWS


A good option is to install the MS SQL Server binaries directly on an EC2 instance.

Advantages:

Full Control: You can launch an EC2 instance and install SQL Server manually. This gives you more control over configurations.

Customization: Choose your operating system and SQL Server version.

Licensing: You can either bring your own license (BYOL) or use AWS's license-included options.
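
For illustration, here is a minimal sketch of finding and launching a license-included SQL Server AMI with the AWS CLI. The AMI name filter, instance type, key pair, and security group below are placeholders/assumptions; verify the actual values for your region and account.

# Find the latest license-included SQL Server AMI (name pattern is an assumption; verify per region).
aws ec2 describe-images --owners amazon \
  --filters "Name=name,Values=Windows_Server-2022-English-Full-SQL_2022_Standard-*" \
  --query "sort_by(Images, &CreationDate)[-1].ImageId"

# Launch an EC2 instance from that AMI (placeholder instance type, key pair, and security group).
aws ec2 run-instances --image-id ami-xxxxxxxx \
  --instance-type m5.xlarge \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx

With BYOL, you would instead launch a plain Windows Server AMI and install your own licensed SQL Server binaries on it.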


Saturday, December 2, 2023

Data Loss Prevention (DLP) Tools


Data Loss Prevention (DLP) tools are crucial for protecting sensitive information from unauthorized access, misuse, or loss. These tools help meet industry-standard security and compliance requirements by monitoring and controlling data movement across various systems.

Some prominent DLP tools:


Symantec Data Loss Prevention (DLP): Offers comprehensive data protection across endpoints, networks, and storage. It includes advanced content inspection and contextual analysis to safeguard sensitive data.


McAfee Total Protection for Data Loss Prevention: Provides robust protection for sensitive data with policy enforcement across endpoints, networks, and cloud environments. It integrates well with McAfee’s broader security solutions.


Forcepoint Data Loss Prevention: Utilizes behavioral analytics to detect and prevent data breaches. It focuses on user activity and data usage patterns to enforce security policies.


Microsoft 365 Data Loss Prevention: Integrated into Microsoft 365, it offers protection for data within Office apps, email, and OneDrive. It provides policy templates and real-time monitoring capabilities.


Digital Guardian Data Loss Prevention: Specializes in protecting intellectual property and sensitive data across endpoints, networks, and cloud environments with flexible policy management.


Trend Micro Data Loss Prevention: Provides endpoint and network DLP solutions with advanced threat intelligence and integration with Trend Micro’s broader security suite.


IBM Security Guardium: Focuses on database activity monitoring and data protection, offering robust analytics and real-time alerts to safeguard sensitive data.


Varonis Data Security Platform: Monitors data access and usage across file systems and email systems, providing insights into potential risks and ensuring compliance with data protection regulations.


Proofpoint Enterprise DLP: Delivers content inspection and monitoring to protect data across email, cloud, and endpoints, with a focus on preventing data breaches and compliance violations.


Check Point Data Loss Prevention: Integrates with Check Point’s security infrastructure to provide policy enforcement and data protection across various platforms and applications.

The tools listed above are versatile options for ensuring that sensitive information remains protected and compliant with industry regulations.


Thursday, June 5, 2014

Sharing a script that logs all long-running queries, cancels them :) and sends an email alert to my inbox. I generally modify it according to the production requirement; in my environment there are a couple of production servers where I set up this script to only alert and not cancel any query.
Please test thoroughly before implementing in a real production environment, and follow the general rule of thumb of promoting all changes/scripts from lower to higher environments. :) This script comes with the disclaimer that whoever uses it has complete ownership of its results.
Ensure postfix is installed and configured for the email alert to work. The script runs as the postgres user, since postgres has privileges over all databases on the server.
############################
#!/bin/bash
# Run only as the postgres OS user, which has privileges over all databases.
if [ "$(whoami)" != "postgres" ]; then
    exit 0
fi

# Log file path is an assumption; adjust to your environment.
LOGFILE=/tmp/long_running_queries.log

# Selecting non-idle queries which have been running for at least 6 minutes.
# Note: the columns below (pid, state, query) are for PostgreSQL 9.2+;
# on 9.1 and earlier use procpid and current_query != '<IDLE>' instead.
psql -c "select pid, client_addr, query_start, query from pg_stat_activity
where state != 'idle' and query not like 'COPY%' and query not like 'VACUUM%'
and substring(query, 1, 11) != 'autovacuum:'
and query_start + '6 min'::interval < now()
order by query_start desc" > "$LOGFILE"

# Row count taken from psql's "(N rows)" footer.
NUMBER_OF_STUCK_QUERIES=$(grep "([0-9]* row[s]*)" "$LOGFILE" | sed 's/(//' | awk '{ print $1 }')

if [ "$NUMBER_OF_STUCK_QUERIES" != 0 ]; then
    # Getting the first column from the output, discarding alphanumeric values (table elements in psql's output).
    STUCK_PIDS=$(sed "s/([0-9]* row[s]*)//" "$LOGFILE" | awk '{ print $1 }' | sed "s/[^0-9]//g")
    for PID in $STUCK_PIDS; do
        echo -n "Cancelling PID $PID ... " >> "$LOGFILE"

        # pg_cancel_backend() returns "t" when the query is successfully cancelled.
        SUCCESS=$(psql -c "SELECT pg_cancel_backend($PID);" | grep " t")
        if [ "$SUCCESS" ]; then
            SUCCESS="OK."
        else
            SUCCESS="Failed."
        fi
        echo "$SUCCESS" >> "$LOGFILE"
    done

    mail -s "Stuck queries detected and cancelled after running over 6 minutes." youremail@whatever.com < "$LOGFILE"
fi

rm -f "$LOGFILE"
#######################################
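
To run the check periodically, a cron entry along these lines could be used for the postgres user (the five-minute schedule and script path are assumptions; adjust to your environment):

# Run the long-running-query check every 5 minutes (hypothetical path).
*/5 * * * * /usr/local/bin/cancel_long_queries.sh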


Saturday, February 1, 2014

MERGE Feature in PostgreSQL

From PostgreSQL 9.1 onwards, users can implement MERGE-like behavior using writable CTEs (Common Table Expressions).
WITH provides a way to write auxiliary statements for use in a larger query. It can be thought of as defining temporary tables that exist for just one query. Each auxiliary statement in a WITH clause can be a SELECT, INSERT, UPDATE, or DELETE, and the WITH clause itself is attached to a primary statement that can also be a SELECT, INSERT, UPDATE, or DELETE.

I created two tables and inserted some data into them (a sketch of the setup is below); now let's merge the two tables.
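
For reference, the two tables can be created and populated along these lines (the column types here are assumptions inferred from the output that follows):

create table testmerge (pid int, age int, name varchar(10));
create table merge2 (pid int, age int, name varchar(10));
insert into testmerge values (2,48,'RAM'), (4,61,'SHYAM'), (6,85,'SONIA'), (8,44,'RAHUL'), (10,34,'MAMTA'), (12,45,'SURJIT'), (14,21,'ISHIKA'), (16,19,'IPSA');
insert into merge2 values (18,56,'YASUDA'), (8,0,'RAHUL'), (14,5,'ISHIKA');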

[root@ip--- ~]# psql tejidatabase
psql (9.2.4.8)
Type "help" for help.

tejidatabase=# select * from testmerge;
 pid | age |  name  
-----+-----+--------
   2 |  48 | RAM
   4 |  61 | SHYAM
   6 |  85 | SONIA
   8 |  44 | RAHUL
  10 |  34 | MAMTA
  12 |  45 | SURJIT
  14 |  21 | ISHIKA
  16 |  19 | IPSA
(8 rows)

tejidatabase=# select * from merge2;
 pid | age |  name  
-----+-----+--------
  18 |  56 | YASUDA
   8 |   0 | RAHUL
  14 |   5 | ISHIKA
(3 rows)

tejidatabase=# WITH upsert as (update merge2 m set age=d.age+100 ,name=d.name from testmerge d where m.pid=d.pid RETURNING m.*) insert into merge2 select a.pid,a.age,'NEW' from testmerge a where a.pid not in ( select b.pid from upsert b);
INSERT 0 6
tejidatabase=# select * from merge2;
 pid | age |  name  
-----+-----+--------
  18 |  56 | YASUDA
   8 | 144 | RAHUL
  14 | 121 | ISHIKA
   2 |  48 | NEW
   4 |  61 | NEW
   6 |  85 | NEW
  10 |  34 | NEW
  12 |  45 | NEW
  16 |  19 | NEW
(9 rows)

tejidatabase=# 

As you can see, all the rows of testmerge have been added to merge2 with name='NEW', and for the pids that match between testmerge and merge2, the ages have been increased by 100.

That is the magic of writable CTEs, which make UPSERT possible in PostgreSQL.

:-)

Saturday, May 11, 2013

A few practical, frequently needed PostgreSQL commands.

1. Check PostgreSQL server status, stop and start.
Depending on your version of PostgreSQL:
Check service status (# /etc/init.d/ppas-9.2 status)
Stop service (# service ppas-9.2 stop)
Start service (# service ppas-9.2 start)

[root@ip—~]# /etc/init.d/ppas-9.2 status
pg_ctl: no server running
[root@ip-~]# service ppas-9.2 start
Starting Postgres Plus Advanced Server 9.2: 
waiting for server to start.... done
server started
Postgres Plus Advanced Server 9.2 started successfully
[root@ip-~]# /etc/init.d/ppas-9.2 status
pg_ctl: server is running (PID: 8595)
/opt/PostgresPlus/9.2AS/bin/edb-postgres "-D" "/opt/PostgresPlus/9.2AS/data"

2. Confirm PostgreSQL version
# select version();
[root@ip- ~]# psql
psql (9.2.4.8)
Type "help" for help.

edb=# select version();
                                                      version
--------------------------------------------------------------------------------------------------------------------
 EnterpriseDB 9.2.4.8 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit
(1 row)

edb=# 

3. List the databases in your connected PostgreSQL server
# \l 
Note: above is slash with lowercase L
# select datname from pg_database;

edb=# select datname,datcollate,datctype,datconnlimit from pg_database;
   datname    | datcollate |  datctype  | datconnlimit 
--------------+------------+------------+--------------
 template1    | en_US.UTF8 | en_US.UTF8 |           -1
 template0    | en_US.UTF8 | en_US.UTF8 |           -1
 postgres     | en_US.UTF8 | en_US.UTF8 |           -1
 edb          | en_US.UTF8 | en_US.UTF8 |           -1
 tejidatabase | en_US.UTF8 | en_US.UTF8 |           -1
 testdb       | en_US.UTF8 | en_US.UTF8 |           -1
(6 rows)

edb=# \l
                                  List of databases
     Name     |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
--------------+----------+----------+------------+------------+-----------------------
 edb          | postgres | UTF8     | en_US.UTF8 | en_US.UTF8 | 
 postgres     | postgres | UTF8     | en_US.UTF8 | en_US.UTF8 | 
 tejidatabase | postgres | UTF8     | en_US.UTF8 | en_US.UTF8 | =Tc/postgres         +
              |          |          |            |            | postgres=CTc/postgres+
              |          |          |            |            | teji=CTc/postgres    +
              |          |          |            |            | testuser=CTc/postgres
 template0    | postgres | UTF8     | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
              |          |          |            |            | postgres=CTc/postgres
 template1    | postgres | UTF8     | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
              |          |          |            |            | postgres=CTc/postgres
 testdb       | teji     | UTF8     | en_US.UTF8 | en_US.UTF8 | 

(6 rows)

4. Last but not least, get help and information on psql commands.
# \?
Note: shows help for psql's backslash commands.
# \h SELECT
Note: shows the details and syntax of the SELECT command.
\h can be used to check the syntax of any SQL command.

edb=# \h select
Command:     SELECT
Description: retrieve rows from a table or view
Syntax:
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
    * | expression [ [ AS ] output_name ] [, ...]
    [ FROM from_item [, ...] ]
    [ WHERE condition ]
    [ GROUP BY expression [, ...] ]
    [ HAVING condition [, ...] ]
    [ WINDOW window_name AS ( window_definition ) [, ...] ]
    [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ]
    [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
    [ LIMIT { count | ALL } ]
    [ OFFSET start [ ROW | ROWS ] ]
    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
    [ FOR { UPDATE | SHARE } [ OF table_name [, ...] ] [ NOWAIT ] [...] ]

where from_item can be one of:

    [ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
    ( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
    with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
    function_name ( [ argument [, ...] ] ) [ AS ] alias [ ( column_alias [, ...] | column_definition [, ...] ) ]
    function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] )
    from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) ]

and with_query is:

    with_query_name [ ( column_name [, ...] ) ] AS ( select | values | insert | update | delete )

TABLE [ ONLY ] table_name [ * ]

edb=# 
