Monday, October 7, 2024

Title Changed -- reflects my journey

 

The title "Evolving Architect: Combining Data, Design, and Project Management" captures my journey as I grow from data-centric expertise to a broader role that encompasses infrastructure design and project management. It reflects a dynamic progression and emphasizes a well-rounded skill set, likely resonating with all of you readers interested in comprehensive IT and infrastructure insights. This title is concise yet reflective of my expanded focus across multiple facets of IT architecture.

Wednesday, August 14, 2024

My Technologies Journey

 It’s fascinating how my passion for databases has shaped my journey. From the early days of exploring data structures to understanding complex database systems, my love for this field has only deepened. I continue to marvel at the rapid technological advances not only in databases but across various domains. Each innovation inspires me, pushing me to stay curious and engaged with emerging trends that can transform the way we manage and utilize data.


As I reflect on my personal and professional growth, I realize that my skills have significantly expanded. I’ve transitioned from focusing solely on databases to embracing broader concepts like infrastructure architecture and design. This evolution has opened up new horizons, allowing me to approach challenges with a more holistic mindset. I’m excited to dive deeper into these areas, exploring how they intersect with my foundational knowledge of databases.


With this expanded skill set in mind, I believe it’s time to update my blog’s heading to better reflect my current interests and expertise. As I tackle infrastructure architecting and project management, I want my content to resonate with those who share a similar passion for technology and innovation. While I’m considering various headings, I’m open to suggestions and would love to hear any ideas that could help capture this exciting new chapter of my journey.



Wednesday, July 3, 2024

Briefly: ZPTR

 Zero Trust Packet Routing is an advanced cybersecurity concept that applies the Zero Trust framework to network traffic. Unlike traditional models that rely on perimeter defenses, Zero Trust assumes that no traffic, internal or external, is inherently trusted. It verifies every packet at each stage to ensure authenticity, integrity, and compliance with security policies.


Central to this approach are the principle of least-privilege access and rigorous identity verification. Every packet, even from authenticated users, undergoes inspection based on user identity, device posture, and contextual factors like location and time. This continuous verification employs mechanisms such as segmentation and context-based routing decisions.


Key Components of Zero Trust Packet Routing:

Microsegmentation

Context-Aware Routing

Identity and Access Management (IAM)

Advanced Packet Filtering

Encryption and Data Integrity
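
To make the microsegmentation component a little more concrete, here is a minimal sketch using plain iptables on a Linux gateway (the subnets and port are hypothetical examples, not taken from any particular environment). It captures only the default-deny, least-privilege flow-control piece; full Zero Trust Packet Routing also layers identity, device-posture and context checks on top, which an ordinary packet filter cannot express.

# Default-deny between segments: no forwarded traffic is trusted by default.
iptables -P FORWARD DROP
# Allow return traffic only for connections that were already verified and established.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Explicitly permit only the app tier (10.0.1.0/24) to reach the database tier
# (10.0.2.0/24) on PostgreSQL's port 5432; everything else stays blocked.
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -p tcp --dport 5432 -j ACCEPT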

Saturday, February 3, 2024

SQL Server on AWS



A good option is to install the MS SQL Server binaries yourself on an EC2 instance (a minimal launch sketch follows the list of advantages below).

Advantages:

Full Control: You can launch an EC2 instance and install SQL Server manually. This gives you more control over configurations.

Customization: Choose your operating system and SQL Server version.

Licensing: You can either bring your own license (BYOL) or use AWS's license-included options.
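
A minimal sketch with the AWS CLI is shown below; every ID, key name and tag is a placeholder you would replace with values from your own account, and r5.xlarge is just an example instance size.

# Launch a Windows EC2 instance to host SQL Server (all IDs are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type r5.xlarge \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=sqlserver-01}]'
# Once the instance is running, RDP in and install the SQL Server binaries,
# or pick a license-included SQL Server AMI as the --image-id instead.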


Saturday, December 2, 2023

Data Loss Prevention (DLP) Tools


Data Loss Prevention (DLP) tools are crucial for protecting sensitive information from unauthorized access, misuse, or loss. They help organizations meet industry-standard security and compliance requirements by monitoring and controlling data movement across various systems.

Some prominent DLP tools:


Symantec Data Loss Prevention (DLP): Offers comprehensive data protection across endpoints, networks, and storage. It includes advanced content inspection and contextual analysis to safeguard sensitive data.


McAfee Total Protection for Data Loss Prevention: Provides robust protection for sensitive data with policy enforcement across endpoints, networks, and cloud environments. It integrates well with McAfee’s broader security solutions.


Forcepoint Data Loss Prevention: Utilizes behavioral analytics to detect and prevent data breaches. It focuses on user activity and data usage patterns to enforce security policies.


Microsoft 365 Data Loss Prevention: Integrated into Microsoft 365, it offers protection for data within Office apps, email, and OneDrive. It provides policy templates and real-time monitoring capabilities.


Digital Guardian Data Loss Prevention: Specializes in protecting intellectual property and sensitive data across endpoints, networks, and cloud environments with flexible policy management.


Trend Micro Data Loss Prevention: Provides endpoint and network DLP solutions with advanced threat intelligence and integration with Trend Micro’s broader security suite.


IBM Security Guardium: Focuses on database activity monitoring and data protection, offering robust analytics and real-time alerts to safeguard sensitive data.


Varonis Data Security Platform: Monitors data access and usage across file systems and email systems, providing insights into potential risks and ensuring compliance with data protection regulations.


Proofpoint Enterprise DLP: Delivers content inspection and monitoring to protect data across email, cloud, and endpoints, with a focus on preventing data breaches and compliance violations.


Check Point Data Loss Prevention: Integrates with Check Point’s security infrastructure to provide policy enforcement and data protection across various platforms and applications.

These tools are versatile enough to ensure that sensitive information, or as we say, data, remains protected and compliant with industry regulations.


Thursday, June 5, 2014

Sharing a script that logs all long-running queries, kills them :) and sends an email alert to my inbox. I generally modify it according to the production requirement; in my environment there are a couple of production servers where I set up this script to only alert and not kill any query.
Please test it thoroughly before implementing it in a real production environment, and follow the general rule of thumb of promoting all changes/scripts from lower to higher environments. :) This script comes with a disclaimer: the person using it takes complete ownership of its results.
Ensure postfix is installed and configured for the email alerts to work. The script is meant to run as the postgres user, since postgres has privileges over all databases on the server.
############################
#!/bin/bash
# Run only as the postgres user (it has privileges over all databases).
if [ "$(whoami)" != "postgres" ]; then
  exit 0;
fi

# Log file for this run; adjust the path to suit your environment.
LOGFILE=/tmp/long_running_queries.log

# Select non-idle queries that have been running for at least 6 minutes
# (pg_stat_activity column names as of PostgreSQL 9.2+: pid, state, query).
psql -c "select pid, client_addr, query_start, query from pg_stat_activity
where state <> 'idle' and query not like 'COPY%' and query not like 'VACUUM%'
and substring(query, 1, 11) != 'autovacuum:'
and query_start + '6 min'::interval < now()
order by query_start desc" > $LOGFILE

# Row count taken from psql's "(N rows)" footer.
NUMBER_OF_STUCK_QUERIES=`cat $LOGFILE | grep "([0-9]* row[s]*)" | sed 's/(//' | awk '{ print $1 }'`

if [ "$NUMBER_OF_STUCK_QUERIES" != "0" ]; then
  # First column (pid) only, discarding the non-numeric decorations in psql's table output.
  STUCK_PIDS=`cat $LOGFILE | sed "s/([0-9]* row[s]*)//" | awk '{ print $1 }' | sed "s/[^0-9]//g"`
  for PID in $STUCK_PIDS; do
    echo -n "Cancelling PID $PID ... " >> $LOGFILE

    # pg_cancel_backend() returns "t" when the query is successfully cancelled.
    SUCCESS=`psql -c "SELECT pg_cancel_backend($PID);" | grep " t"`
    if [ "$SUCCESS" ]; then
      SUCCESS="OK.";
    else
      SUCCESS="Failed.";
    fi
    echo $SUCCESS >> $LOGFILE
  done

  cat $LOGFILE | mail -s "Stuck queries detected and cancelled after running over 6 minutes." youremail@whatever.com;
fi

rm $LOGFILE
#######################################
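
I usually run it on a schedule. A hypothetical crontab entry for the postgres user, assuming the script above is saved as /usr/local/bin/cancel_long_queries.sh, would look like this:

# Check for long-running queries every 10 minutes.
*/10 * * * * /usr/local/bin/cancel_long_queries.sh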


Saturday, February 1, 2014

MERGE Feature in PostgreSQL

From PostgreSQL 9.1 onwards, users can implement MERGE-like (upsert) behaviour using writable CTEs (Common Table Expressions).
WITH provides a way to write auxiliary statements for use in a larger query; it can be thought of as defining temporary tables that exist for just one query. Each auxiliary statement in a WITH clause can be a SELECT, INSERT, UPDATE or DELETE, and the WITH clause itself is attached to a primary statement that can also be a SELECT, INSERT, UPDATE or DELETE.

I created two tables and inserted some data into them; now let's merge the two tables.
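
The table definitions are not shown above, so here is a minimal sketch that would reproduce the data used below (column types inferred from the query output):

CREATE TABLE testmerge (pid int, age int, name varchar(20));
CREATE TABLE merge2   (pid int, age int, name varchar(20));

INSERT INTO testmerge VALUES
  (2, 48, 'RAM'), (4, 61, 'SHYAM'), (6, 85, 'SONIA'), (8, 44, 'RAHUL'),
  (10, 34, 'MAMTA'), (12, 45, 'SURJIT'), (14, 21, 'ISHIKA'), (16, 19, 'IPSA');

INSERT INTO merge2 VALUES
  (18, 56, 'YASUDA'), (8, 0, 'RAHUL'), (14, 5, 'ISHIKA');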

[root@ip--- ~]# psql tejidatabase
psql (9.2.4.8)
Type "help" for help.

tejidatabase=# select * from testmerge;
 pid | age |  name  
-----+-----+--------
   2 |  48 | RAM
   4 |  61 | SHYAM
   6 |  85 | SONIA
   8 |  44 | RAHUL
  10 |  34 | MAMTA
  12 |  45 | SURJIT
  14 |  21 | ISHIKA
  16 |  19 | IPSA
(8 rows)

tejidatabase=# select * from merge2;
 pid | age |  name  
-----+-----+--------
  18 |  56 | YASUDA
   8 |   0 | RAHUL
  14 |   5 | ISHIKA
(3 rows)

tejidatabase=# WITH upsert as (update merge2 m set age=d.age+100 ,name=d.name from testmerge d where m.pid=d.pid RETURNING m.*) insert into merge2 select a.pid,a.age,'NEW' from testmerge a where a.pid not in ( select b.pid from upsert b);
INSERT 0 6
tejidatabase=# select * from merge2;
 pid | age |  name  
-----+-----+--------
  18 |  56 | YASUDA
   8 | 144 | RAHUL
  14 | 121 | ISHIKA
   2 |  48 | NEW
   4 |  61 | NEW
   6 |  85 | NEW
  10 |  34 | NEW
  12 |  45 | NEW
  16 |  19 | NEW
(9 rows)

tejidatabase=# 

As you can see, every row of testmerge with no matching pid in merge2 was inserted into merge2 with name = 'NEW', while for the pids present in both tables the age was updated to the testmerge age plus 100 and the name refreshed from testmerge.

That's the magic of writable CTEs, which make UPSERT possible in PostgreSQL.

:-)
