-
Hyperic plugin for Cloud Foundry
Hyperic + Cloud Foundry
Cloud Foundry is the revolutionary open platform as a service from VMware that supports multiple frameworks, application services, and clouds. The vFabric Hyperic team is pleased to announce the availability of the free Cloud Foundry plugin for Hyperic that brings Hyperic’s proven ability to monitor, alert, and control application infrastructure resources to Cloud Foundry’s applications and services.
Overview and Features
Cloud Foundry’s VMC command line interface allows you to deploy and manage your applications running on CloudFoundry.com, but serious users require the automation and continuous monitoring capabilities of full-fledged management tools. Therefore, we have developed the Cloud Foundry plugin for Hyperic, which uses the same APIs that VMC uses to talk to the Cloud Controller and brings all of that information and capability into a dashboard GUI view, while tracking metric data and events for historical purposes. The Hyperic plugin communicates with CloudFoundry.com remotely, so it’s easy to deploy into any existing Hyperic instance running in your data center. Some of the features include:
Auto-discovers and collects metrics about Cloud Foundry system and account usage
Auto-discovers and collects metrics for Cloud Foundry provisioned services, including MongoDB, MySQL, Redis, and RabbitMQ (once available).
Auto-discovers and collects metrics for Cloud Foundry applications
Enables control actions to manage Cloud Foundry applications
Start an application
Stop an application
Restart an application
Update reserved memory for an application
Update the number of instances for an application
Scale up an application by 1 instance
Scale down an application by 1 instance
Performs event tracking of Cloud Foundry application crashes
Auto-syncs the Hyperic inventory when applications and services are created or deleted from Cloud Foundry
Benefits
Using Hyperic in conjunction with a Cloud Foundry account gives you a lot of benefits and control. Here are just some of the benefits you’ll get when using them together.
Create alerts to notify or fix application runtime issues
Visual dashboard view of health and configuration of all applications in Cloud Foundry whether or not they are running
Review application deployment, availability, and resource consumption history
On-demand, scheduled, or automated control actions to start, stop, restart, re-configure, or scale Cloud Foundry applications
Track events when applications crash or change state
Compare resource utilization against user quota
…and many more
In addition, you can combine these metrics with other existing Hyperic services, such as HTTP or ping checks, to get an even more comprehensive view of what is happening with your running applications, including response time and availability (from the client perspective).
Installation and Configuration
The Cloud Foundry plugin for Hyperic is now available on HyperForge. Follow the Configuration Instructions section to download, install, and configure the plugin on both the Hyperic server and agent, and you’ll be monitoring your Cloud Foundry account in minutes.
Note that because the Cloud Foundry server resource is created manually rather than auto-discovered, the Hyperic agent may not gather the user account properties right away after provisioning. To expedite the property discovery process, you can restart the agent to kickstart the data gathering. You can restart the agent remotely by navigating to the agent resource in the Hyperic UI, clicking on the Views tab, and then Agent Commands. There you can select ‘restart’ to restart the Hyperic agent and have it report your Cloud Foundry user account info immediately.
Additional Information
Here are some additional sources of information regarding this plugin:
A screencast demonstrating how to create the Cloud Foundry resource, create alerts, and perform control actions
If you are not yet a Cloud Foundry user, go sign up for an account on the current beta service at CloudFoundry.com
If you are not familiar with Hyperic and would like to find out more, please go to vFabric Hyperic
Hyperic HQ is the open source edition of Hyperic and is fully compatible with the Cloud Foundry plugin; find out more about it at the Hyperic Community
-
Shinguz's Blog (en): MySQL Query Cache does not work with Complex Queries in Transactions
We recently did a review of one of our customers' systems and found that the Query Cache was disabled even though the system had significantly more read than write queries.
When we asked the customer why he had not enabled the Query Cache, he mentioned a review done a few years ago which stated that the Query Cache hit ratio was suboptimal.
This had been verified on a testing system which had the Query Cache enabled by accident.
But we all thought that the Query Cache should make sense in this situation, so we investigated a bit more.
They have a Java application which runs pretty complex queries (10- to 30-way joins) and connects to the database with Connector/J. We tried it out in the application on a dedicated system and verified that the Query Cache was not serving our queries; instead, each query did a full dive to the data.
So first we looked in the MySQL documentation to see whether anything was stated about why the queries could not be stored in the Query Cache.
There are many situations in which the query cache cannot be used [How the Query Cache Operates], but none of those situations matched our case. However, it is clearly stated: "The query cache also works within transactions when using InnoDB tables."
In an old but usually reliable source from 2006 we found the statement: "Might not work with transactions" [MySQL Query Cache]. This looks a bit suspicious...
To find out why the Query was not served from the Query Cache, we enabled the General Query Log and cut out the sequence which was not working as expected.
The sequence sent by Connector/J looks as follows (1):

SET AUTOCOMMIT=0;
SELECT <complex query>;
COMMIT;
ROLLBACK;
SET AUTOCOMMIT=1;

We ran this sequence manually in the mysql client twice (to see if the Query Cache was used).
Then we did the same thing in the mysql client with the following "sequence" (2), also twice:

SELECT <complex query>;
When we compared the MySQL GLOBAL STATUS variables, we found the following:
Status                    before tests   after (1)   after (1)   after (2)   after (2)
Qcache_hits                      3           3           3           3           4
Qcache_inserts                  47          48          48          49          49
Qcache_not_cached               46          46          47          47          47
Qcache_queries_in_cache          0           1           1           2           2
Com_select                      91          92          93          94          94
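To see at a glance which counters moved between two snapshots, a small helper can diff the readings. This is a sketch; the snapshot dictionaries below are hand-filled with the numbers from the table above rather than fetched from a live server.

```python
def qcache_delta(before, after):
    """Return {counter: (before, after)} for every counter that changed."""
    return {k: (before[k], after[k])
            for k in before if after.get(k) != before[k]}

# Snapshots: before the tests, and after the first run of sequence (1),
# using the values measured above.
before = {"Qcache_hits": 3, "Qcache_inserts": 47, "Qcache_not_cached": 46,
          "Qcache_queries_in_cache": 0, "Com_select": 91}
after_1 = {"Qcache_hits": 3, "Qcache_inserts": 48, "Qcache_not_cached": 46,
           "Qcache_queries_in_cache": 1, "Com_select": 92}

# Qcache_inserts and Com_select went up while Qcache_hits did not:
# the query was inserted into the cache but never served from it.
print(qcache_delta(before, after_1))
```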
It looks like the complex query is cached in the Query Cache within a transaction started with SET AUTOCOMMIT=0, but is then not served on a second request. When the same complex query is run with AUTOCOMMIT enabled, it is served from the Query Cache as expected, but the first run does NOT see the query cached by sequence (1)!
This could be a possible explanation for why the Query Cache in our customer's situation had a very bad hit ratio.
Unfortunately, we could not reproduce this problem with a simple query on our own testing systems. But we are working on it and trying to figure out when and why it happens.
This problem possibly affects all Java applications using Connector/J with transactions, and possibly applications in other programming languages as well, if they run the same sequence of commands. Further, it looks like it only affects complex joins.
A way out of this situation would be to not use transactions :( or to avoid overly complex multi-join queries.
The tests were done with MySQL 5.1.34 and newer.
If you can reproduce this behavior please let us know.
-
Reasons to use MySQL 5.5 Presentation
I recently gave a presentation at the New York Effective MySQL Meetup on the new features of MySQL 5.5 and some of the compelling reasons to upgrade. There are also a number of new MySQL variables that can have a dramatic effect on performance in a highly transactional environment; innodb_buffer_pool_instances and innodb_purge_threads are just two to consider.
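For illustration, both variables are set in my.cnf and take effect after a server restart. The values below are hypothetical examples for a sketch, not recommendations from the presentation:

```ini
[mysqld]
# Split the InnoDB buffer pool into several instances to reduce
# mutex contention under highly concurrent workloads (new in 5.5).
innodb_buffer_pool_size      = 4G
innodb_buffer_pool_instances = 4
# Move purge work out of the master thread into its own thread (new in 5.5).
innodb_purge_threads         = 1
```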
For more information on all the new variables, status, reserved words and benchmarks of new features you can Download Presentation Slides.
-
How the MariaDB download system works
During my years at MySQL AB I had the unfortunate task of manually maintaining the download page for enterprise customers. This involved a ton of boring, error prone work and almost always led to some sort of error every release. Some of our downloads were eventually replaced with an automated system written by the web team but the memory of all that time wasted still hurts me. So when I joined Monty Program and saw our downloads were manually maintained in mediawiki I knew something had to change.
Most of the websites for Monty Program and the MariaDB project are written with Django, so this is where I started. I used our existing website code base and just created a new Django application for downloads. There are many models / tables involved in the system, but the important ones are:
Releases: A list of all the releases we have made, e.g. MariaDB 5.2.7, MariaDB 5.1.55, etc.
Files: The individual files that make up a release.
Mirrors: The information (name, url, location) of the MariaDB mirrors.
Rules: This is the heart of the system; it controls how a file name gets assigned to a release and how its various other attributes, such as OS and CPU, are set.
When a MariaDB release is ready to publish our release coordinator pushes the files to our primary mirror and tells the download management system to check for a new release. The system scans the mirror and captures the information (name, size, directory) of new files. The system then loops through each rule in order and checks if it applies.
A rule is basically a regular expression plus a snippet of Python code to run. Massive regular expressions are always a pain to work with, so we try to keep the rules as simple as possible. For example, here is one of our rules:
Name: CPU – x86_64
Regex: .*x86_64.*
Code: file.cpu = 'x86_64'
Some rules are obviously more complex, but this is a good example of what we aim for. It is easy to understand, and if something needs to be changed it can be done easily. The file object in the code section is a helper object that makes writing the rules easier by hiding the actual complexities of the underlying objects. I considered using some sort of rules engine but decided that added unneeded complexity (the top answer on this question helped shape my opinion: http://stackoverflow.com/questions/467738/implementing-a-rules-engine-in-python)
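A minimal sketch of how such a rule loop might work. The FileInfo class and the rule tuples here are illustrative stand-ins, not the actual MariaDB code:

```python
import re

class FileInfo:
    """Stand-in for the helper object exposed to rule snippets."""
    def __init__(self, name):
        self.name = name
        self.cpu = None

def apply_rules(file_info, rules):
    """Run each (regex, code) rule whose pattern matches the file name."""
    for pattern, code in rules:
        if re.match(pattern, file_info.name):
            # The snippet sees the helper object under the name `file`,
            # as in the example rule above.
            exec(code, {}, {"file": file_info})
    return file_info

rules = [(r".*x86_64.*", "file.cpu = 'x86_64'")]
info = apply_rules(FileInfo("mariadb-5.2.7-Linux-x86_64.tar.gz"), rules)
print(info.cpu)  # → x86_64
```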
Once all the rules have been applied the release coordinator takes a final look and publishes the release. If there is a problem later, the whole release or individual files can be pulled.
The front end is fairly straightforward and there isn’t much to discuss but here are a few highlights:
The file listing is loaded via ajax so applying filters is fast.
Your mirror is picked by first looking at your country, then your continent. If someone tries to download from Antarctica, a random mirror will be chosen.
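The country-then-continent fallback can be sketched like this. This is a simplified illustration; the dictionary field names are assumptions, not the actual mirror schema:

```python
import random

def pick_mirror(mirrors, country, continent):
    """Prefer a mirror in the user's country, then continent, else any."""
    in_country = [m for m in mirrors if m["country"] == country]
    if in_country:
        return random.choice(in_country)
    in_continent = [m for m in mirrors if m["continent"] == continent]
    if in_continent:
        return random.choice(in_continent)
    # e.g. a request from Antarctica: no nearby mirror, pick one at random
    return random.choice(mirrors)

mirrors = [{"name": "de1", "country": "DE", "continent": "EU"},
           {"name": "us1", "country": "US", "continent": "NA"}]
print(pick_mirror(mirrors, "FR", "EU")["name"])  # → de1
```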
That, in a nutshell, is how our download system works. If anyone has any questions about it, I'm happy to answer, either in the comments or on Freenode #maria.
-
Ning Tech Talk on MySQL - Date Correction
Apologies for the scheduling conflict but the date has moved to the 20th of July.
Save the date: July 20, 2011
RSVP here: Ning-Tech-Talks
There have been many database infrastructure changes through Ning’s history since the company began in 2004. Our most recent, and hopefully final, database iteration is running on MySQL. Over the past year we’ve designed and implemented a stable and highly available MySQL environment. MySQL was chosen to reduce Ning’s total cost of ownership and increase overall availability to our customers, Network Creators and their members.
Join us at this Ning Tech Talk, led by Chris Schneider, Ning's MySQL Architect, as he explains how we implemented and currently maintain MySQL on the Ning Platform.
Pizza, beer and other refreshments will be served. We'll begin with a quick series of informal "Lightning Talks" -- guests can present active projects or interests they're working on. If you'd like to present, there's a spot to propose your topic when you RSVP. There will also be time to ask Ning's Engineering and Ops teams any burning questions you have! We look forward to seeing you at Ning HQ in downtown Palo Alto!
Hope to see you there.
Chris