Interview with Angelica Kotliar

I recently had the opportunity to talk to Angelica, the model on all five covers of my Urban Ghoul series! I’ve never spoken to a book cover model before, and must admit I’d never given it much thought. All I knew was that my cover designer, Erin, produces amazing covers. I thought you too might be interested in the […]


I’m interviewed for Blondie & The Brit


Featuring the fantastic author/writing podcast Blondie & The Brit, who interviewed me last week. It was a lot of fun and we covered a variety of topics.

Play the interview

You can also right-click on the destination page if you want to download it.




Christine Rains: The 13th Floor novellas

Please give a warm welcome to Christine Rains, author of the paranormal series The 13th Floor (among others). I’ve read the first two in the series, The Marquis and The Alpha, and they’re super reads. Christine writes about very compelling, charismatic characters. The Dragonslayer is out now, with The Harbinger coming March 13th.

Graeme: How and why did you become an author?

CR: I’ve always been a writer. Ever since I could print, I wrote stories. I write because I love it. It took a long time to gain the confidence to publish, though. Perhaps age has toughened up my skin.

Graeme: Which of your books is your favourite, and why?

CR: Oh, this is tough. I love my concept for the 13th Floor series. I’m currently writing it, so I’m in the “in love” stage with it. Yet for my favorite, I have to say my Magena Silver trilogy. Magena is a unique protagonist. She challenges me as a writer. I love the cast of characters in the trilogy.

Graeme: Which of the 13th Floor occupants is your favourite, and why?

CR: Harriet the banshee. Did I answer that too quickly? I hope the others aren’t jealous. Harriet is a compassionate soul that was cursed and has to live a double life. She’s in love with a vampire, and the only time they see each other is when she’s in her hideous old hag form. I love her internal conflict and how that along with her curse makes her interact with the world.

Graeme: Which authors/books inspire you the most?

CR: Stephen King has been a great inspiration for me since I was a teenager. His worldbuilding and characters are phenomenal. I also love George R.R. Martin, J.K. Rowling, and Karen Marie Moning.

Graeme: Tell us some random thing about you that your readers might not know.

CR: I have lots of freckles and I hate them.

Graeme: Describe your writing environment.

CR: I have a little room stuffed full of books and games. The walls are a calming moss green and there’s paintings by my son on the walls. The desk is old and simple, but it’s still holding up. I always have a thesaurus beside me, and if I’m lucky, a sweet snack.

Graeme: Are you a plotter or a “pantser”?

CR: Pantser. I’ve tried to outline and become more of a plotter. Organization would save me time and I need that, but I’ve never been able to stick to an outline. My stories take on a life of their own and go where they want. Luckily for me, it has worked out most of the time!

Graeme: Who would you most like to meet and what would you say to/ask them?

CR: I always have trouble with this question. I’d like to meet a lot of people, but I’m very shy. One day, I’d like to take a car trip (for a book tour, if I’m lucky!) across the country from one coast to the other and meet all my online friends.

Graeme: What’s next once you’ve finished with the 13th Floor? Tease us…

CR: I’m not sure which project I’ll pick up next. I have a couple of paranormal romance manuscripts I want to polish and query later this year. Possibly one involving a sexy witch who’s an expert at brewing love potions or a reclusive psychic that has fallen in love with a ghost.

Graeme: If you could live anywhere in the world, where and why?

CR: A log cabin in the woods by the Rocky Mountains. A cozy little home with not much upkeep. That way I’ll have time to write and travel.

Graeme: Awesome! Thanks for answering those, and I can’t wait to see what you come up with next. Folks, check out Christine’s 13th Floor books below. Well worth a read.

Christine Rains

Christine Rains is a writer, blogger, and geek mom. She has four degrees which help nothing with motherhood but make her a great Jeopardy player. When she’s not reading or writing, she’s going on adventures with her son or watching cheesy movies on Syfy Channel. Christine is a member of Untethered Realms and S.C.I.F.I. She has twenty short stories and five novellas published.

Contact Christine via her Website, Facebook, Goodreads or Twitter as @CRainsWriter


On the rooftop of a neighboring building, dragonslayer Xanthus Ehrensvard fires at his target, Governor Whittaker. How he missed the shot, he doesn’t know, but fleeing the scene, he picks up an unwanted passenger. Gorgeous reporter Lois King saw Xan’s face, and she believes it’s the story to make her career. Except he can’t let her walk away knowing what he looks like. Xan has to show her the Governor is a bigger threat to the world than he is.

Xan knows dragons never went extinct. They evolved with human society, taking on mortal forms, and slithered their way into positions of great influence and power, just like the Governor. But it’s no easy chore proving to someone that dragons still exist, and even more so, that they’re disguised as famous people. Xan must convince Lois or find another way to silence her. An option that, as he gets to know her, he likes less and less.

After all, dragonslayers are no longer celebrated heroes but outlaws. Just as the dragons wish it. But this outlaw must make a plan to slay the dragon or risk its retribution.

Amazon | B&N | Kobo | Smashwords | Goodreads


Interview with me by Heather Day Gilbert

Recently, I was thrilled to answer some questions posed by a great writer friend of mine, Heather Day Gilbert. Among other projects, she is trying to find a publisher for her fantastic-sounding Viking novel. I can’t wait to read it.

In this interview, we talk about writing YA, world building and Indie publishing. It was a lot of fun. Thanks, Heather! :)


Interview with me about Indie Publishing

Hello all, and I hope everyone had a fabulous Christmas (or other holiday).

A little while ago, Candace, at the fabulous Candace’s Book Blog, invited me for an interview. I had a lot of fun answering her in-depth questions about Indie Publishing.

Take a look at the interview!

If you’re an avid reader, or interested in the latest publishing trends, I recommend you follow Candace.

Have a great New Years and I hope you all have a super 2013.



Paul McCullagh answers your questions about PBXT

Following on from our earlier announcement, Paul McCullagh has responded with the answers to your questions – as well as a few I gathered from other Percona folks, and attendees of OpenSQL Camp. Thank you Paul!

What’s the “ideal” use case for the PBXT engine, and how does it compare in performance?  When would I use PBXT instead of a storage engine like MyISAM, InnoDB or XtraDB?

Unfortunately it is not possible to point to a specific category of applications and say, “PBXT will be better here, so try it”. PBXT is a general purpose transactional storage engine, designed to perform well on a broad range of tasks, much like InnoDB. However, PBXT’s log-based architecture makes its performance characteristics different from both MyISAM and InnoDB/XtraDB. Tests show that PBXT’s performance is similar to InnoDB but, depending on your database design and the application, it can be faster.

PBXT is a community project and, of course, we depend on users trying it out. In the long run, this will determine to what extent we are able to continue to develop and improve the engine.  So, despite this rather vague answer, we are hoping that more people try it out, and work with us to improve the engine as necessary.  My thanks to all who are already doing this! :)

I think I remember reports that PBXT (at an early stage) outperformed InnoDB with INSERTs and UPDATEs (but not SELECTs). That would make PBXT very interesting for non-SELECT-intensive applications (finance, production management, etc.) in my opinion. Is this the case, and do you have any recent benchmarks available?

This is no longer necessarily the case. For example, a test by Mark Callaghan shows that PBXT can actually outperform InnoDB with SELECTs under some circumstances.

The implementation of full-durability has changed the performance characteristics of PBXT from “MyISAM-like” to more InnoDB-like. Originally PBXT was conceived as an engine that would be somewhere between MyISAM and InnoDB in both performance and features. The early version of PBXT was not fully durable (equivalent to innodb_flush_log_at_trx_commit=2).

A major change was completed at the beginning of last year with the implementation of full durability. In doing this, it was important to keep the log-based architecture, which was the reason for the high write performance of earlier versions.

Traditional transactional implementations suffer from the problem that a backlog of asynchronous writes accumulates until it swamps the engine. There has been a lot of work on both InnoDB and XtraDB to solve this problem. The key words here are fuzzy and adaptive checkpointing (the former originally implemented by Heikki for InnoDB, and the latter an excellent addition to XtraDB).

Both methods improve the management of the asynchronous writes. The idea behind the log-based solution, on the other hand, is to avoid accumulating a backlog of asynchronous writes in the first place, by writing synchronously.

Although write performance is comparable with InnoDB, I am not entirely convinced that PBXT’s implementation of the log-based I/O is optimal at this stage. This is ongoing work for PBXT 1.5.

Morgan notes: As well as Adaptive Checkpointing, Oracle has also been working on Adaptive Flushing for the InnoDB Plugin. The engine being ‘swamped’ problem that Paul is referring to is best described visually – see this post for more info.

What were the hard decisions or trade-offs that you had to make when designing PBXT?

If you read the white paper from 2006, you will notice that the original design was uncompromisingly MVCC-based. Some of this has been changed to make PBXT more InnoDB-like, but other principles have remained.

Pure MVCC does not do any locking. Read locks are not required because each transaction effectively gets its own snapshot of the database. And write locks are not acquired when updating. Instead, the application can hit an “optimistic lock error” if a record is updated by another user.

Now PBXT does acquire locks for two reasons: to support SELECT FOR UPDATE, and to avoid optimistic locking errors. This makes PBXT’s behavior identical to InnoDB in REPEATABLE READ mode.
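The snapshot-plus-optimistic-write model Paul describes can be sketched in a few lines. This is a toy illustration of the general idea, not PBXT’s code: each transaction reads against the snapshot it started with, and a write fails with an optimistic lock error if another transaction committed a newer version in the meantime.

```python
# Toy illustration of MVCC snapshot reads and optimistic write conflicts.
# This is NOT PBXT's implementation -- just the idea described above.

class OptimisticLockError(Exception):
    pass

class Store:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0

    def begin(self):
        return Txn(self, self.clock)

class Txn:
    def __init__(self, store, snapshot_ts):
        self.store = store
        self.snapshot_ts = snapshot_ts

    def read(self, key):
        # See only versions committed at or before our snapshot.
        for ts, value in reversed(self.store.versions.get(key, [])):
            if ts <= self.snapshot_ts:
                return value
        return None

    def write(self, key, value):
        versions = self.store.versions.setdefault(key, [])
        # Optimistic check: fail if someone committed after our snapshot.
        if versions and versions[-1][0] > self.snapshot_ts:
            raise OptimisticLockError(key)
        self.store.clock += 1
        versions.append((self.store.clock, value))

store = Store()
t0 = store.begin()
t0.write("x", 1)

t1 = store.begin()   # snapshot taken after t0's write
t2 = store.begin()
t2.write("x", 2)     # t2 commits a newer version

print(t1.read("x"))  # 1 -- t1 still sees its snapshot
try:
    t1.write("x", 99)  # conflicts with t2's later commit
except OptimisticLockError:
    print("optimistic lock error")
```

For brevity the sketch commits each write immediately, so a transaction does not see its own writes; a real engine buffers writes per transaction until commit.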

On the other hand, there are currently no plans to implement InnoDB style “gap locking”. Gap locking effectively involves locking rows that do not exist. This, in turn, means that PBXT transactions are not SERIALIZABLE.  A result of this is that statement-based replication is not supported by the engine.

Another hard decision was not to implement clustered indexes, which I discuss in more detail in a later question.

How does online backup work in PBXT, and is incremental backup possible?

A recent version of PBXT (1.0.09) supports the MySQL Backup API which was originally implemented in MySQL 6.0. This feature is now scheduled for an upcoming version of MySQL 5.4.

The Backup API makes it possible to pull a consistent snapshot of an entire database even when tables use different engine types. The API does not yet support incremental backup, but this is planned.

Internally this feature is implemented by PBXT using an MVCC-based consistent snapshot.

Does PBXT have a maintenance thread like InnoDB’s main thread?

PBXT has several system threads that are responsible for various maintenance tasks. The most important of these are the “Writer”, the “Sweeper” and the “Checkpointer”:

  • The Writer transfers data from the transaction log to the database table files. This task runs constantly and is the main source of asynchronous I/O in the engine.
  • The Sweeper is a thread unique to the PBXT architecture. Its job is to clean up after a transaction. Basically, it recognizes which record versions are no longer needed and deletes them from the database.
  • The Checkpointer flushes the data written by the Writer to disk. It is also responsible for writing consistent snapshots of the index files to disk. The frequency of checkpoints, as with other engines like InnoDB, determines the recovery time.
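The division of labour above can be mimicked with a deliberately simplified, single-threaded sketch. The role names come from the interview; the logic here is illustrative only (real PBXT runs these as concurrent system threads against real files):

```python
# Simplified simulation of the three PBXT maintenance roles described
# above. Commits only append to the log; the Writer, Sweeper and
# Checkpointer do their work after the fact.

class Engine:
    def __init__(self):
        self.log = []          # transaction log (the synchronous write path)
        self.tables = {}       # key -> list of record versions (table files)
        self.flushed = set()   # keys the Checkpointer has flushed to disk
        self.checkpoint = 0    # log position recovery would replay from

    def commit(self, key, value):
        # A commit is just a sequential log append -- this is what gives
        # the log-based design its write performance.
        self.log.append((key, value))

    def writer(self):
        # Writer: transfer logged changes into the table files.
        for key, value in self.log[self.checkpoint:]:
            self.tables.setdefault(key, []).append(value)

    def sweeper(self, key):
        # Sweeper: discard record versions no transaction can still see
        # (here, simply everything but the newest version).
        if key in self.tables:
            self.tables[key] = self.tables[key][-1:]

    def checkpointer(self):
        # Checkpointer: flush the Writer's work and advance the point
        # from which crash recovery must replay the log.
        self.flushed.update(self.tables)
        self.checkpoint = len(self.log)

engine = Engine()
engine.commit("a", 1)
engine.commit("a", 2)
engine.writer()
engine.sweeper("a")
engine.checkpointer()
print(engine.tables["a"], engine.checkpoint)  # [2] 2
```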

Does PBXT support clustered indexes?

No, currently it does not. This is one of the original design decisions (as raised by a previous question).  Two things contributed to this decision:

  • Supporting clustered indexes would have made the implementation of PBXT more complicated than I would have liked. My goal was to make PBXT as simple as possible, so that it is easy to maintain the code and add features.
  • I believe clustered indexes are becoming less relevant with the rise of Solid State technology. As random read access times decrease, clustering of data will become less and less important.

What is the page size in PBXT, and can it be tuned?

PBXT uses 16K pages for the index data and (approximately) 32K pages for the table data. Both sizes can be set using compile time switches. However, if the index page size is changed, then the indices need to be rebuilt, which can be done by REPAIR TABLE.  The table data page size does not require a rebuild because a page of records in the table is just a group of records (not an actual fixed length page).

Are there any differences in the PBXT implementation of MVCC that might surprise experienced InnoDB DBAs?  Also – In MVCC does it keep the versions in indexes, and can PBXT use MVCC for index scans?

If you are using InnoDB in REPEATABLE READ mode, then there is essentially no difference in the isolation paradigm between the two engines.

REPEATABLE READ is often preferred over SERIALIZABLE mode because it allows a greater degree of concurrency while still providing the necessary transaction isolation. So I do not consider the lack of serializability a serious deficit in the engine. And, fortunately, MySQL 5.1 supports row-based replication, which makes it possible to do replication while using REPEATABLE READ.

PBXT does use MVCC to do index scans. Basically this means that all types of SELECTs can be done without locking.

Morgan notes: Indexes not using MVCC is one of the main differences in the Falcon storage engine.

PBXT supports row-level locking and foreign keys. Does this create any additional locking overhead that we should be aware of?

Firstly, PBXT does not acquire read locks. A normal SELECT does not lock at all. In addition, an UPDATE or DELETE only acquires a temporary row-lock. This lock is released when the row is updated or deleted because the MVCC system can detect that a row has been changed (and is therefore write locked).

This means that PBXT does not normally need to maintain long lists of row-level locks. This is also the case when a foreign key leads to cascading operations which can affect thousands of rows.

The only case you need to be aware of is SELECT FOR UPDATE. This operation acquires and holds a row-level lock for each row returned by the SELECT. These locks are all stored in RAM. The format is quite compact (especially when row IDs are consecutive) but this can become an issue if millions of rows are selected in this manner.
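Paul’s note that the lock format is compact “especially when row IDs are consecutive” suggests run-style storage. As a hedged illustration of why consecutive IDs help (this is not PBXT’s actual format), a lock table can collapse runs of IDs into ranges instead of keeping one entry per row:

```python
# Illustrative only: collapse locked row IDs into inclusive (start, end)
# runs, so a big consecutive SELECT ... FOR UPDATE result stays small in RAM.

def lock_ranges(row_ids):
    """Collapse an iterable of row IDs into sorted inclusive (start, end) runs."""
    ranges = []
    for rid in sorted(row_ids):
        if ranges and rid == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], rid)   # extend the current run
        else:
            ranges.append((rid, rid))           # start a new run
    return ranges

# One million consecutive locked rows collapse to a single range entry:
print(lock_ranges(range(1, 1_000_001)))   # [(1, 1000000)]
print(lock_ranges([3, 4, 5, 9, 10, 42]))  # [(3, 5), (9, 10), (42, 42)]
```

Scattered (non-consecutive) IDs degrade to one range per row, which matches Paul’s caveat that selecting millions of rows this way can become an issue.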

I should also mention that the consequence of this is that SELECT … LOCK IN SHARE MODE is currently not supported.

When I evaluate a storage engine my key acceptance criteria are things like backup, concurrency, ACID compliance and crash recovery. As a storage engine developer, what other criteria do you think I should be adding?

Yes, I think you have mentioned the most important criteria. What I can add to this list are three things that make developing a storage engine extremely demanding: performance, stability and data integrity.

Of course, as a DBA or database user these aspects are so basic that they are taken for granted.

But engine developers need to keep performance, stability and data integrity in mind constantly. The problem is, they compete with each other: increasing performance often causes instabilities that then have to be fixed. How to optimize the program without compromising data integrity is a constant question.

Relative to maintaining performance, stability and data integrity, adding features to an engine is easy. So I would say that these are the criteria that concern a developer the most.

MySQL supports a “pluggable storage engine API”, but it seems that not all the storage engine vendors are able to keep all their code at that layer (Infobright had to make major changes to MySQL itself).  What war stories can you report on in plugging into MySQL?

Unfortunately the “war” continues. I have already received several e-mails that PBXT does not compile with the recently released MySQL 5.1.41!

Any dot release can lead to this problem, and I think PBXT is fairly moderate in how deeply it integrates with MySQL.

My main advantage: I have been able to avoid modifying any part of MySQL to make the engine work. This means that PBXT runs with the standard MySQL/MariaDB distribution.

But this has required quite a bit of creative work, in other words, hacks.

One of the main problems has been running into global locks when calling back into MySQL to do things like open a table, create a session structure (THD) or create a .frm file.

One extreme example of this is PBXT recovery. When MySQL calls the engine “init” method on startup it is holding the global LOCK_plugin lock. In init, the engine needs to do recovery. In PBXT’s case this means opening tables (reading a .frm file), which requires creating a THD. The code to create a THD in turn tries to acquire LOCK_plugin!

Unfortunately a thread hangs if it tries to acquire the same mutex twice, so this just does not work!

We went through quite a few iterations (MySQL code was also changing during the development of 5.1) before we came up with the current solution: create a background thread to do recovery asynchronously. So the thread can wait for the LOCK_plugin to be unlocked before it continues.

The effect is that the init function returns quickly, but the first queries that access PBXT tables may hang waiting for recovery to complete.
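The workaround can be modelled in a few lines: a non-reentrant global lock is held around init, so recovery is handed to a background thread that acquires the lock only after init’s caller has released it. The names below (`plugin_lock`, `engine_init`) are illustrative stand-ins, not MySQL’s actual API:

```python
# Toy model of the deadlock-avoidance trick described above: init() runs
# with a non-reentrant lock held, and recovery also needs that lock, so
# recovery is deferred to a background thread.

import threading

plugin_lock = threading.Lock()   # stands in for MySQL's LOCK_plugin
recovered = threading.Event()

def recover():
    # Blocks until the caller of engine_init() releases the global lock.
    with plugin_lock:
        # ... replay the transaction log here ...
        recovered.set()

def engine_init():
    # Called with plugin_lock already held; acquiring it again here would
    # deadlock (threading.Lock is non-reentrant), so defer recovery instead.
    threading.Thread(target=recover).start()

with plugin_lock:        # the server holds the lock around init
    engine_init()        # returns quickly; recovery has not run yet
    assert not recovered.is_set()

# A first query against the engine's tables would block here until
# recovery completes -- exactly the behaviour Paul describes.
recovered.wait(timeout=5)
print("recovery done:", recovered.is_set())
```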

PBXT seems to have very few configuration parameters.  Was this an intentional design decision, and do you see it creating opportunities for you in organizations with less internal IT-expertise?

No, this is not by design.

While I try to only add tuning parameters that are absolutely necessary, PBXT is not specifically designed to be self-tuning, because I believe that is a very hard problem to solve in general.

Tuning parameters are often added to an engine in response to performance problems in particular configurations. This is not necessarily a bad thing because it provides DBAs with the tools they need.

My goal for PBXT in this regard is twofold:

  1. I hope that most installations can get away with setting a minimum of tuning parameters, in particular, just the cache values.
  2. On the other hand, I aim to provide expert tuning parameters for installations that need to extract maximum performance from the hardware. This work is ongoing.

Morgan notes: There are more in InnoDB/XtraDB now than there were three years ago. This is probably something that emerges over time as we get to understand more about an engine.

Entry posted by Morgan Tocker
