Monday, April 2, 2012

Book Review - High Performance MySQL 3rd Edition


This is THE MySQL performance book. Period!



Every chapter is very well crafted, with a precise balance between theory and practice, and full of invaluable nuggets that sometimes transcend the MySQL arena and apply to any database! Examples are Chapter 2, "Benchmarking MySQL," and Chapter 3, "Profiling Server Performance," which lay very solid foundations for the reading ahead.

Throughout the text, the authors propose tools, usage examples, and proven diagnostic techniques that will greatly improve your performance-firefighting skills and deepen your knowledge of MySQL internals. Nevertheless, what I liked most about this book is that it takes into account the physical side of the database structures in play when speaking about performance, something most authors don't include; plus its treatment of MySQL high availability and cloud features, which we will increasingly see at customer sites.

As you may know, the MySQL architecture relies on what are called "storage engines," and this book provides resolution down to that level, describing the behavior, pros, and cons of each major storage engine, plus some improvements coming in MySQL 5.6 (now in beta). That is cool!!!
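If you want a quick inventory of which engine each of your tables uses before digging into those chapters, a query against information_schema does it. This is just a minimal sketch; the schema name 'mydb' is a placeholder:

-- List each table's storage engine and row format for one schema
SELECT table_name, engine, row_format
  FROM information_schema.tables
 WHERE table_schema = 'mydb'
 ORDER BY table_name;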

However, there is something that kind of bugged me at times: the references to a specific commercial MySQL offering and its tools. After taking a look at that company's website, though, I found it is a great contributor to the MySQL community and you can actually use the tools!

At the O'Reilly website you can take a look at the book's TOC; I bet you will find it very complete. Hey, you can even peek at the content with Google Preview, just follow the link: High Performance MySQL

If you are interested in mathematically rigorous methods for performance tuning, see Performance Enlightening - The Craig Effect (Tropa de Elite)


Thursday, March 29, 2012

Performance Enlightening - Craig Effect (Tropa de Elite)

 
During the past week I had the opportunity to attend both the Oracle Performance Firefighting and the Advanced Oracle Performance Analysis courses, and I can only say: awesome!!!
 
Craig Shallahamer is a great teacher with a lot of resources for sharing knowledge effectively, even when the subject is as complex as buffer cache structures or as arid as the mathematical foundation required for performance analysis. That is not all: he shares valuable tricks of the trade and anecdotal nuggets, delivered with a very fresh and sometimes humorous perspective.
 
The result: we started searching for those AWR and Statspack reports that had been difficult to analyze; I even recalled some unsolved performance cases that stained my record. The difference now is that we all have the analytical elements to properly handle these and new challenges.
 
Here is a group picture of "The Elite Squad" (Tropa de Élite)
Tropa de Élite
Sao Paulo, Brazil / March 19th-23rd, 2012
If you want to know more about these trainings or get more insight into Craig's methodologies and research, please click the image below (a new window will show up; you may need to allow this pop-up).
 
 

Sunday, March 4, 2012

Book Review - iOS 5 Programming Cookbook


You will write iOS apps in less time than you think





Having been a hardcore C/BASIC programmer for mobile devices in a past life, I enjoyed this book very much because it really helped me quickly grasp many basic and intermediate topics in iOS 5 Objective-C programming. I would say that after reading the full book I have everything needed to program good basic applications; and even after gaining practice and experience, I guess I will still resort to this book as a reference. In my opinion this book is awesome and a must for beginners and intermediate programmers.


Honestly, I am not much of a fan of the cookbook format; however, this is one case where the author, Vandad Nahavandipoor, exploits it wonderfully, showing and developing case after case of what you will face as an iOS programmer, both for iPad and iPhone! He misses nothing and includes good explanations and concise screenshots where needed.


Now, the table of contents covers almost everything needed to develop any type of application: Location & Maps, Networking, Audio & Video, plus some "must-have" topics like Multitasking, Graphics & Animations, and Core Motion. My favorite chapter is #2, because it provides the building blocks for GUI programming, including the corresponding source code examples!

Something I would have liked to see is Bluetooth or network programming basics, but maybe I'm overreaching and those are advanced topics, out of the scope of this book.

If you want to take a closer look, go to the product page at the O'Reilly website: iOS 5 Programming Cookbook. I'm sure you will get interested!



Sunday, November 27, 2011

Who is using your Undo space? - Improved Script


Hi folks!
I have extended the Undo usage scripts to include two additional indicators:
1) undo change vector size statistics
2) used undo records/blocks
plus support for RAC infrastructure, so you can spot the hungriest UNDO eaters on any given instance.

The script for Oracle 11g is as follows:
set pagesize 400
set linesize 140
col name for a25
col program for a50
col username for a12
col osuser for a12
-- statistic# 284 corresponds to 'undo change vector size' here; it varies by version (see note below)
SELECT a.inst_id, a.sid, c.username, c.osuser, c.program, b.name,
       a.value, d.used_urec, d.used_ublk
  FROM gv$sesstat a, v$statname b, gv$session c, gv$transaction d
 WHERE a.statistic# = b.statistic#
   AND a.inst_id = c.inst_id
   AND a.sid = c.sid
   AND c.inst_id = d.inst_id
   AND c.saddr = d.ses_addr
   AND a.statistic# = 284
   AND a.value > 0
 ORDER BY a.value DESC;

If you want to run this script on 10gR1 or 10gR2, just replace the statistic# with 176, or with 216 if your database is 11gR1... or use the following version-independent script!!! (Hoping the statistic name doesn't change.)

set pagesize 400
set linesize 140
col name for a25
col program for a50
col username for a12
col osuser for a12
-- filter on the statistic name instead of the version-specific statistic#
SELECT a.inst_id, a.sid, c.username, c.osuser, c.program, b.name,
       a.value, d.used_urec, d.used_ublk
  FROM gv$sesstat a, v$statname b, gv$session c, gv$transaction d
 WHERE a.statistic# = b.statistic#
   AND a.inst_id = c.inst_id
   AND a.sid = c.sid
   AND c.inst_id = d.inst_id
   AND c.saddr = d.ses_addr
   AND b.name = 'undo change vector size'
   AND a.value > 0
 ORDER BY a.value DESC;
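
If you still prefer the statistic#-based version, you can confirm the number on your own release with a quick lookup against V$STATNAME (a minimal sketch):

-- Find the statistic# for 'undo change vector size' on this release
SELECT statistic#, name
  FROM v$statname
 WHERE name = 'undo change vector size';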
 
Read the popular 2008 article Who is using your UNDO space?

Interested in MySQL performance? Read  Book Review - High Performance MySQL



Wednesday, November 23, 2011

First pervasive-post

Today I'm writing this post from a Samsung Galaxy, my hand-sized tablet, with a camera, a microphone, and lots of fun included.

I have been busy this year and that is thanks to the tremendous success of the Database Machine/Exadata.

I traveled all over the Caribbean and South America, plus had my first OOW experience as an Oracle employee: always exciting!



Bandeja Paisa / Paisa "tray" - Medellin, Colombia


Sao Paulo Subway System

Oracle Open World 2011

Friday, July 15, 2011

The Oracle Exadata: latest Business Weapon


Sorry, I've had a lot of work installing, maintaining, and migrating to Database Machines all over Latin America for the last 10 months; that means a lot of travel abroad, all of it very exciting, but the real excitement comes from working with this wonderful machine.

It was on the last project that I realized how important this investment is for our customers: a big Caribbean telco installed their first Exadata and later migrated their databases, which meant dramatic performance improvements and the ability to get more up-to-date analytical information and compete better and more nimbly.

Of course the Database Machine met expectations when we achieved a 6x reduction in their nightly batch processing, going from 8-9 hours down to 1.5-2 hours. What this means: they are now able to refresh their analysis tools up to 12 times a day, and on demand, instead of having the latest information one day behind.

Read my related article on Exadata Hybrid Columnar Compression

My steps to implement EHCC

Recommended reading: the My Oracle Support notes and papers that helped make this happen:

Migrating to Oracle Exadata Storage Server Paper (PDF)

Oracle Sun Database Machine Performance Best Practices [ID 1067520.1]




Tuesday, November 30, 2010

My Steps to Implement Exadata EHCC


The last time I was engaged in an Exadata migration, the customer asked me about EHCC (Exadata Hybrid Columnar Compression) and how to implement it for their data marts.

My approach (at that time) consisted of the following:
1) Get a sample of very big objects to play with
2) Use DBMS_COMPRESSION to get estimates of the compression ratios (see the sketch below)
3) Try every compression type (Basic, Query Low, Query High, Archive Low, Archive High) and record the size reduction, time to import, and time for a full table scan
4) Record and compare timings for various batch processes and important queries

With all those statistics I went back to the customer, who was then able to make a decision based on the nature of the data, the timeframes most queried within the data window, the parallel degree, the partitioning defined (which, by the way, the customer already had in place: a best practice!), and so on.
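
For step 2, here is a minimal sketch of the kind of estimation I ran; it assumes an 11.2 database, a scratch tablespace called USERS, and a hypothetical SALES_FACT table owned by DWH:

set serveroutput on
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  -- Estimate the ratio for Query High; repeat with COMP_FOR_QUERY_LOW,
  -- COMP_FOR_ARCHIVE_LOW and COMP_FOR_ARCHIVE_HIGH for the other modes
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    'USERS', 'DWH', 'SALES_FACT', NULL,        -- scratch tablespace, owner, table, partition
    DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,      -- compression type to estimate
    l_blkcnt_cmp, l_blkcnt_uncmp,
    l_row_cmp, l_row_uncmp,
    l_cmp_ratio, l_comptype_str);
  DBMS_OUTPUT.PUT_LINE(l_comptype_str || ' estimated ratio: ' || ROUND(l_cmp_ratio, 2));
END;
/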

I used Data Pump for the import because it uses direct-path load, and Basic compression needs a direct-path operation to kick in.
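
To illustrate that last point, here is a tiny sketch (table names are hypothetical): a table created with Basic compression only compresses rows that arrive through a direct-path operation, such as CTAS or INSERT /*+ APPEND */:

-- Pre-create the target with Basic compression
CREATE TABLE sales_hist COMPRESS BASIC
AS SELECT * FROM sales_fact WHERE 1 = 0;

-- Direct-path insert: rows are compressed as they are loaded
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales_fact;
COMMIT;

-- A conventional INSERT into the same table would store the rows uncompressed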

Further thinking on this matter resulted in a proposal to also include table usage (V$SEGMENT_STATISTICS) and table data dynamics (monitoring plus DBA_TAB_MODIFICATIONS) in order to fine-tune the compression mode selected. The next time I have the opportunity to include this in the process, I'll share the results with you...
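
As a rough idea of that proposal, queries along these lines (the DWH schema name is a placeholder) show how read-heavy or write-heavy each segment is, which helps when choosing between the Query and Archive modes:

-- Read vs. write activity per segment since instance startup
SELECT object_name, statistic_name, value
  FROM v$segment_statistics
 WHERE owner = 'DWH'
   AND statistic_name IN ('physical reads', 'db block changes')
 ORDER BY object_name, statistic_name;

-- DML volume per table since the last statistics flush
SELECT table_name, inserts, updates, deletes, timestamp
  FROM dba_tab_modifications
 WHERE table_owner = 'DWH'
 ORDER BY inserts + updates + deletes DESC;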

Thank you very much for your time, and if you liked this post please share it with others freely...
