[wp-trac] [WordPress Trac] #33885: meta_form performs a potentially expensive query

WordPress Trac noreply at wordpress.org
Tue Nov 17 00:54:43 UTC 2015


#33885: meta_form performs a potentially expensive query
-------------------------------------+--------------------------
 Reporter:  jorbin                   |       Owner:
     Type:  defect (bug)             |      Status:  new
 Priority:  normal                   |   Milestone:  4.4
Component:  Administration           |     Version:  4.3.1
 Severity:  critical                 |  Resolution:
 Keywords:  has-patch needs-testing  |     Focuses:  performance
-------------------------------------+--------------------------

Comment (by ericmann):

 The expensive query bit me last Friday when it took a client site down.
 We're not using custom fields on the site, so the meta box had been
 disabled pretty much everywhere except on Pages. Someone editing a page
 triggered several instances of the query, which stacked up on the
 database layer and eventually locked things up entirely.
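
 For anyone who wants to do the same, removing the Custom Fields box is
 just a `remove_meta_box()` call on `add_meta_boxes`; a minimal sketch
 (which post types you keep it on will obviously differ):

 {{{
 <?php
 /**
  * Remove the Custom Fields meta box everywhere except on Pages.
  * 'postcustom' is the box ID core registers for the Custom Fields UI.
  */
 add_action( 'add_meta_boxes', function ( $post_type ) {
     if ( 'page' !== $post_type ) {
         remove_meta_box( 'postcustom', $post_type, 'normal' );
     }
 } );
 }}}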

 The first change I attempted was altering the database so the size of
 the `meta_key` column matches the size of the index. That alone helped
 considerably: before the change, the original query took > 140s (s as in
 ''seconds''); after the change, the same query took only 0.015s.

 So this is definitely a change I'd like to see us make.
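
 For anyone testing that schema change, it boils down to a one-off
 statement along these lines. The 191 length is an assumption (it matches
 the utf8mb4 prefix index on newer installs); use whatever length your
 `meta_key` index actually covers, and test on a copy of the database
 first:

 {{{
 <?php
 // One-off script (e.g. run via WP-CLI's `wp eval-file`): shrink meta_key
 // so the whole column is covered by the existing meta_key index. The 191
 // here is an assumption; match it to your index, and back up first,
 // because keys longer than the new length will be truncated or rejected.
 global $wpdb;

 $wpdb->query( "ALTER TABLE {$wpdb->postmeta} MODIFY meta_key VARCHAR(191)" );
 }}}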

 '''However''', the ask here was about this specific patch. I've run both
 pre- and post-patch tests on Pages (again, the only post type with the
 Custom Fields box enabled):

 Trunk: 96.7036ms

 Patched: 39.9126ms

 That's a significant improvement. I like that we're doing the filtering
 in PHP, since it avoids running a long, expensive query in the database
 that could lock things up. However, the looping behavior of the patch
 ''could'' result in running this new query multiple times. In my
 situation it only ran once. If a site's meta keys are distributed such
 that the loop has to run more than twice, the overall performance here
 would actually be ''worse'' than with the existing query.
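
 To make the looping concern concrete, the pattern is roughly the
 following (a simplified sketch of the approach, not the literal patch;
 the function name and batch-size logic are illustrative):

 {{{
 <?php
 /**
  * Simplified sketch of the loop-and-filter-in-PHP pattern: pull a batch
  * of distinct keys, drop protected ones in PHP, and query again with a
  * bigger batch if too few survive. Every extra pass is another full
  * query against wp_postmeta.
  */
 function example_unprotected_meta_keys( $limit = 30 ) {
     global $wpdb;

     $keys       = array();
     $batch_size = $limit;

     for ( $attempt = 0; $attempt < 3; $attempt++ ) {
         $batch_size *= 2;

         $batch = $wpdb->get_col( $wpdb->prepare(
             "SELECT DISTINCT meta_key FROM {$wpdb->postmeta} LIMIT %d",
             $batch_size
         ) );

         // Filter protected keys (leading underscore, etc.) in PHP.
         $keys = array();
         foreach ( $batch as $key ) {
             if ( ! is_protected_meta( $key, 'post' ) ) {
                 $keys[] = $key;
             }
         }

         // Stop once we have enough keys, or the table runs out of them.
         if ( count( $keys ) >= $limit || count( $batch ) < $batch_size ) {
             break;
         }
     }

     return array_slice( $keys, 0, $limit );
 }
 }}}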

 For comparison, running `EXPLAIN` on the existing query (without the DB
 schema change) shows it scanning 1.2 million rows in order to generate
 our list of 30 meta keys (this is a somewhat large database). Running
 `EXPLAIN` on the patched version of the query (again, without the DB
 schema change) shows that we're scanning ''even more'' rows (2.5 million)
 to generate 115 records, which are then filtered down to just the 30 we
 wanted.

 Trunk:
 {{{
            id: 1
   select_type: SIMPLE
         table: wp_postmeta
          type: range
 possible_keys: meta_key
           key: meta_key
       key_len: 767
           ref: NULL
          rows: 1244198
         Extra: Using where; Using temporary; Using filesort
 }}}

 Patched:

 {{{
            id: 1
   select_type: SIMPLE
         table: wp_postmeta
          type: ALL
 possible_keys: NULL
           key: NULL
       key_len: NULL
           ref: NULL
          rows: 2488394
         Extra: Using temporary
 }}}

 So, in this one case, a single run of the query feels faster. But it
 ultimately grabs far more information and loads far more results into
 memory so that PHP can do the filtering outside of the database. I'm
 convinced we have a problem, but I'm not sure this is the right solution.
 It's a strong contender, but it still raises some red flags for me.
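
 For anyone who wants to reproduce the comparison on their own data: with
 `SAVEQUERIES` enabled you can pull the exact SQL `meta_form()` sends out
 of `$wpdb->queries` and feed it back through `EXPLAIN`. A rough sketch,
 assuming `SAVEQUERIES` is already defined as `true` in `wp-config.php`
 (the string match on the query is a crude heuristic):

 {{{
 <?php
 /**
  * Rough sketch: after an admin page load that calls meta_form(), walk
  * the logged queries, find the meta-key listing query, and EXPLAIN it.
  * Assumes SAVEQUERIES is defined as true in wp-config.php.
  */
 add_action( 'shutdown', function () {
     global $wpdb;

     if ( empty( $wpdb->queries ) ) {
         return;
     }

     $logged = $wpdb->queries;

     foreach ( $logged as $query ) {
         $sql = $query[0];

         // Crude match for the meta_form() key-listing query.
         if ( false !== stripos( $sql, 'meta_key' ) && false !== stripos( $sql, 'LIMIT' ) ) {
             error_log( print_r( $wpdb->get_results( 'EXPLAIN ' . $sql ), true ) );
         }
     }
 } );
 }}}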

--
Ticket URL: <https://core.trac.wordpress.org/ticket/33885#comment:11>
WordPress Trac <https://core.trac.wordpress.org/>
WordPress publishing platform

