<section id="functional-benchmarking-accessibility">
<h2><a class="reference external" href="https://www.rpatterson.net/blog/functional-benchmarking-accessibility/">Functional Benchmarking Accessibility</a></h2>
<p>2009-01-07, Ross Patterson</p>
<blockquote>
<div><p>More progress on load test benchmarking</p>
</div></blockquote>
<p>Yesterday was another great day at the Plone Performance Sprint in
Bristol, UK.</p>
<p>I continued working with the <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008/standard-performance-scalability-test-suite-buildout">load test benchmarking</a>
team yesterday. One of the more enjoyable aspects of our team’s work
is how natural and effective the division of labor has been. <a class="reference external" href="http://www.openplans.org/people/tomster/profile">Tom</a> and I worked on
the Funkload buildout and the read-only load tests for Plone core.
<a class="reference external" href="http://www.openplans.org/people/amleczko/profile">Andrew</a> built on
the read-only tests to produce a write-heavy load test. Ed and <a class="reference external" href="http://www.openplans.org/people/russf/profile">Russ</a> have been working at
least in part on different content profiles against which to run the
different test scenarios.</p>
<p>Toward the end of the day, Tom and I moved on to making the
Funkload buildout more generally usable to the wider Plone ecosystem. One of
the first things I did after having a buildout that could run
read-only load test benchmarks was to install and turn on CacheFu
without a cache proxy. Then I ran the benchmarks again and had
Funkload plot some pretty benchmark diff graphs.
Tom started working on packaging this extension of the buildout as a
sample so that add-on maintainers and integrators can see how to do
the same for other add-ons. Then they can easily compare how their
add-on affects base Plone performance using Funkload benchmark diffs.
Funkload rocks! Meanwhile, I began work on making the Funkload script
invocations simpler and more familiar to those of us in the
zope.testing world.</p>
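<p>For the curious, here is a minimal sketch of what such a read-only
test case looks like, and why it feels so familiar: Funkload test
cases are just unittest test cases. This is not one of our actual
sprint tests; the page paths are assumptions, and the server URL
comes from the matching Funkload .conf file.</p>
<pre>
# A minimal sketch of a Funkload read-only test case; the page paths
# are assumptions, not our actual sprint tests.
import unittest

from funkload.FunkLoadTestCase import FunkLoadTestCase


class PloneReadOnly(FunkLoadTestCase):
    """Browse a few Plone pages anonymously."""

    def setUp(self):
        # The server URL is read from the matching .conf file.
        self.server_url = self.conf_get('main', 'url')

    def test_anonymous_browse(self):
        self.get(self.server_url + '/',
                 description='View the site root')
        self.get(self.server_url + '/news',
                 description='View the news listing')
        self.get(self.server_url + '/sitemap',
                 description='View the sitemap')


if __name__ == '__main__':
    unittest.main()
</pre>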
<p>One goal here is to make load test benchmarking more accessible in
general. Ideally, an integrator who is savvy enough to work with
buildout can use the collective.loadtesting buildout (or extend it),
record a new Funkload test using the recorder proxy, and then post
the resulting test module and configuration with their problem
report or question. Part of me shudders at the thought of encouraging
broader access to benchmarking, especially since it’s so easy to
create unrepresentative benchmarks. I think, however, that drawing
back the curtains on Plone performance to expose both the positive and
the negative, even if messy, can be best in the end.</p>
<p>Meanwhile, Andrew’s write-heavy load tests reproduced the
write-concurrency ZODB conflict bug that has recently been discussed
on the lists. This bug is a big one for me, so I’m totally stoked to
see some light being shed on it. The test scenario registers a new
user, logs them in, goes to their member folder, adds a folder, adds a
page to the new folder with lipsum field values, and logs out. The
problems began to show themselves pretty heavily starting at about 5
concurrent users hitting one instance. After brainstorming with
Lawrence, Andrew began generating load test diffs after experimenting
with changes to try and isolate the write-concurrency bug. First,
Andrew looked into whether the response was being rendered before
hitting a conflict error, and thus being rendered again on retry.
The idea was that this could extend the duration of the transaction
long enough to significantly increase conflicts. Archetypes already
does, however, redirect after a successful edit. We do have
many more ideas to test out and now we have real measurements. The
day ended with Andrew factoring out the member registration part of
the test scenario to try and isolate the problem further.</p>
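<p>To give a feel for the scenario, here is a rough sketch of its
shape. This is not Andrew’s actual test: the member credentials and
Plone form URLs are assumptions, and a real version would be recorded
through the Funkload proxy recorder and would fill in the edit forms
with lipsum values (Funkload even ships a Lipsum generator for
exactly that).</p>
<pre>
# A rough sketch of the shape of the write-heavy scenario; the member
# credentials and Plone URLs are assumptions, not the real test.
import unittest

from funkload.FunkLoadTestCase import FunkLoadTestCase


class PloneWriteHeavy(FunkLoadTestCase):
    """Log in, write content in the member folder, log out."""

    def setUp(self):
        self.server_url = self.conf_get('main', 'url')

    def test_member_writes(self):
        server = self.server_url
        member = 'load-test-user'  # hypothetical pre-registered member
        # Log in through the standard Plone login machinery.
        self.post(server + '/logged_in', params=[
            ['__ac_name', member],
            ['__ac_password', 'secret']],
            description='Log in')
        # Go to the member folder.
        self.get('%s/Members/%s' % (server, member),
                 description='View member folder')
        # Add a folder; Plone redirects to its edit form, which a
        # recorded test would then fill in and submit.
        self.get('%s/Members/%s/createObject?type_name=Folder'
                 % (server, member),
                 description='Add a folder')
        self.get(server + '/logout', description='Log out')


if __name__ == '__main__':
    unittest.main()
</pre>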
<p>Today Tom and I will likely focus on further polishing and documenting
the buildout for release to the community. We’ll probably also work
on the buildbot configuration that <a class="reference external" href="http://www.openplans.org/people/witsch/profile">Andreas</a> provided. We want
to package it to be run against Plone core development on a regular
basis. It would be great to have a set of pages available to view the
diff of performance for the last day of changes, the last week of
changes, the last month of changes, etc.</p>
<div class="note update admonition">
<p class="admonition-title">Updated on 07 January 2009</p>
<p>Imported from Plone on Mar 15, 2021. The date for this update is the last
modified date in Plone.</p>
</div>
</section>
<section id="at-the-plone-performance-sprint">
<h2><a class="reference external" href="https://www.rpatterson.net/blog/at-the-plone-performance-sprint/">At the Plone Performance Sprint</a></h2>
<p>2008-12-12, Ross Patterson</p>
<blockquote>
<div><p>In Bristol helping make Plone go faster</p>
</div></blockquote>
<p>I’m at the <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008">Plone Performance Sprint</a> in
Bristol, UK and I’m having a total blast.</p>
<p>First and foremost, it’s a great group. I’m just having too much fun
working in person with all these people I’ve heretofore only known
through the interwebs. The <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008/topics">topics</a>
brainstorming session yielded a lot of great ideas and potential
directions. I kinda resent having to choose between the
instrumentation and <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008/standard-performance-scalability-test-suite-buildout">load testing</a>
topics. :)</p>
<p>Florian has a lot of nifty ideas about instrumenting various levels of
the Plone stack to get meaningful performance data. This is sorely
needed. Theory and guessing in discussions about Plone performance are
all well and good, but as we all know: measure, don’t guess.
Florian’s instrumentation effort stands to get us good measurements of
things ranging from pickle retrieval on the ZODB level all the way up
to viewlet rendering time in the UI. I’m definitely looking forward
to using whatever they produce. I can’t say much right now, but
hopefully in the near future we’ll all be hearing from Mr. Bent. :)</p>
<p>In the end I’ve decided to go with the <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008/standard-performance-scalability-test-suite-buildout">load testing topic</a>.
I’ve been wanting good baseline metrics for Plone performance for some
time. Every now and then, a Plone rock star does some profiling and
finds some code and applies a two line change that increases
performance by some ridiculous factor. While that’s certainly not the
rock star’s fault, and not to denigrate the rock star’s contribution,
this should never happen. Something should have alerted us to the
hotspot very shortly after it was introduced. Our hope
is that, with a basic set of load tests run by buildbot, we’ll know
when changes are made that impact performance. There are other goals
you can read about on the <a class="reference external" href="http://www.openplans.org/projects/plone-performance-sprint-2008/standard-performance-scalability-test-suite-buildout">wiki</a>,
but this is my primary goal. I hope to be a part of making work on
Plone performance boringly predictable. Let’s take the mystery out of
it. :)</p>
<p>After the brainstorming and topic selection and such, we got a bit of
a start on the load testing story. The first question was which tool
to use, for which there were basically only two contenders: JMeter
and Funkload. I started out advocating for JMeter. I’d had a brief
exposure to Funkload and had a bad experience with it, though I can’t
remember why any more. I built a very intricate load test suite with
JMeter after that. It did everything I needed it to do, and the
capacity to slice and dice the reports and graphs using the UI is
great, but everything else sucks. The UI sucks. Using regexps for
the things JMeter uses them for sucks. Using Java sucks. Still, I
advocated for it because it does what it says it will do quite
admirably.</p>
<p>At this sprint, many were also under the impression that we should use
JMeter, but there were also a handful of Funkload lovers. Through the
subsequent discussion and experimentation, I think most, if not all,
of us in the JMeter camp have been thoroughly converted. Now that I
understand Funkload better, I see that I give up nothing I really need
and I gain… well, Python!</p>
<p>One idea I’m not sure I’ll have time to explore is integrating
testbrowser and Funkload. If I could make that work, then I could write
testbrowser doctests that can be run as load tests with full reporting
options! /me swoons</p>
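<p>For the record, here is the shape of that idea in a purely
hypothetical sketch: a minimal browser-like wrapper whose open()
delegates to the Funkload test case, so doctest-style navigation gets
timed, reported requests. None of this glue exists yet; BenchBrowser
is invented here for illustration.</p>
<pre>
# Purely hypothetical sketch of the testbrowser/Funkload idea; the
# BenchBrowser class is invented here and quacks only a very little
# like zope.testbrowser's Browser.
from funkload.FunkLoadTestCase import FunkLoadTestCase


class BenchBrowser(object):

    def __init__(self, testcase):
        self.testcase = testcase
        self.contents = None

    def open(self, url):
        # Each open() becomes a timed, reported Funkload request.
        self.testcase.get(url, description='browser.open %s' % url)
        self.contents = self.testcase.getBody()


class TestBrowserStyle(FunkLoadTestCase):

    def test_browse(self):
        browser = BenchBrowser(self)
        browser.open(self.conf_get('main', 'url'))
        assert 'Plone' in browser.contents
</pre>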
<p>Today we’ll be getting started with the actual load tests. Good
times!</p>
<div class="note update admonition">
<p class="admonition-title">Updated on 12 December 2008</p>
<p>Imported from Plone on Mar 15, 2021. The date for this update is the last
modified date in Plone.</p>
</div>
</section>
<section id="new-membrane-and-remember-maintainer">
<h2><a class="reference external" href="https://www.rpatterson.net/blog/new-membrane-and-remember-maintainer/">New membrane and remember Maintainer</a></h2>
<p>2008-12-02, Ross Patterson</p>
<blockquote>
<div><p>Rob Miller announced today that I’ll be the new maintainer</p>
</div></blockquote>
<p>In a <a class="reference external" href="http://www.openplans.org/projects/remember/lists/remember/archive/2008/12/1228256501493/forum_view">post</a>
to the remember list today, Rob Miller announced what we’ve been
discussing for a while: I’m now the new maintainer for <a class="reference external" href="http://plone.org/products/membrane">membrane</a> and
<a class="reference external" href="http://plone.org/products/remember">remember</a>. I was stoked when he asked and since then we’ve had some
great discussions so I’m even more stoked now. :)</p>
<p>I’ve subsequently posted a <a class="reference external" href="http://www.openplans.org/projects/remember/lists/remember/archive/2008/12/1228267212368/forum_view">survey</a>
in hopes of getting a sense of what people are using membrane and
remember for and what direction they’d like to see them take. It
should only take a few seconds to respond to, so please do if you have
any interest in membrane or remember at all.</p>
<p>I’m looking forward to working with Rob and to helping keep membrane and
remember moving forward!</p>
<div class="note update admonition">
<p class="admonition-title">Updated on 02 December 2008</p>
<p>Imported from Plone on Mar 15, 2021. The date for this update is the last
modified date in Plone.</p>
</div>
</section>
<section id="collective-securitycleanup">
<h2><a class="reference external" href="https://www.rpatterson.net/blog/collective.securitycleanup/">collective.securitycleanup</a></h2>
<p>2008-12-01, Ross Patterson</p>
<blockquote>
<div><p>GenericSetup handlers to restore Zope security to defaults</p>
</div></blockquote>
<p>WARNING: Back up your ZODB before using this package!</p>
<p>The Zope 2 security framework is very powerful and is one of Zope’s
greatest strengths. A lot of its power comes from its
flexibility. Exposing that power to site administrators often ends up
giving them enough rope to hang themselves with. This is exactly what
the “Security” tab in the ZMI does.</p>
<aside class="system-message">
<p class="system-message-title">System Message: INFO/1 (<span class="docutils literal">/builds/rpatterson/ross-pattersons-site/blog/collective.securitycleanup/index.rst</span>, line 15); <em><a href="#id1">backlink</a></em></p>
<p>Duplicate implicit target name: “collective.securitycleanup”.</p>
</aside>
<p>In many cases, a site admin or consultant is faced with the daunting
task of restoring all the security settings throughout the Zope object
hierarchy in order to bring sanity and predictability back to the
site. The <a class="reference external" href="http://pypi.python.org/pypi/collective.securitycleanup">collective.securitycleanup</a> package
provides GenericSetup handlers for restoring the role mappings and
local roles back to their defaults. This handler can be used in
combination with existing handlers to set role mappings and to
re-apply workflow security settings to help start the process of
security cleanup.</p>
<p>The cleanup is performed on all ancestors of the context on which
the handler is used, up to and including the Zope application root,
and on all of the context’s descendants by walking down the
hierarchy. It will not clean up siblings or anything else that is
not a direct ancestor or descendant of the context.</p>
<p>The cleanup removes all permission settings stored on the instance,
which effectively restores them to the code defaults. The cleanup also
removes all local roles except the “Owner” role for the user returned
by OFS.interfaces.IOwned.getOwnerTuple(), if already assigned.</p>
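<p>For those curious what that amounts to under the hood, here is a
simplified sketch of the idea; this is not the actual
collective.securitycleanup code. Zope 2 stores instance-level
permission settings in specially named attributes, so deleting them
restores the code defaults.</p>
<pre>
# A simplified sketch of the cleanup idea, not the actual
# collective.securitycleanup code. The real handlers also deal with
# acquisition and walk the hierarchy as described above.

def cleanup_security(obj):
    """Restore instance-level security settings to code defaults."""
    # Zope 2 stores permission settings in attributes named like
    # "_View_Permission"; deleting them restores the class defaults.
    for name in list(obj.__dict__):
        if name.startswith('_') and name.endswith('_Permission'):
            delattr(obj, name)
    # Drop local roles, keeping only the owner's "Owner" role if it
    # was already assigned.
    local_roles = {}
    owner = obj.getOwnerTuple()  # from OFS.interfaces.IOwned
    if owner is not None:
        path, user_id = owner
        existing = getattr(obj, '__ac_local_roles__', None) or {}
        if 'Owner' in existing.get(user_id, ()):
            local_roles[user_id] = ['Owner']
    obj.__ac_local_roles__ = local_roles
</pre>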
<p>Use of this tool will likely only ever be a starting point, so be
sure to test thoroughly before deploying to your production server,
and back up your ZODB before using it.</p>
<div class="note update admonition">
<p class="admonition-title">Updated on 01 December 2008</p>
<p>Imported from Plone on Mar 15, 2021. The date for this update is the last
modified date in Plone.</p>
</div>
</section>
<section id="evaluating-add-ons">
<h2><a class="reference external" href="https://www.rpatterson.net/blog/evaluating-add-ons/">Evaluating Add-Ons</a></h2>
<p>2008-12-01, Ross Patterson</p>
<blockquote>
<div><p>What is the risk of adding a given dependency?</p>
</div></blockquote>
<p>As often happens, a client asked me about adding a given package to
their buildout. I just realized that I have a standard response to
this, so it might be worth documenting for the wider community of
admins and integrators. My criteria are as follows, roughly in
order of priority:</p>
<blockquote>
<div><ul>
<li><p>Good backing:</p>
<p>Find the author or maintainer on the plone.org/products page or
on PyPI. Are they a company or individual that is widely known
in the community?</p>
</li>
<li><p>Recent versions supported:</p>
<p>Does the add-on support recent versions of its own dependencies?
For a Plone package today this means checking if it supports
Plone 3.</p>
</li>
<li><p>Final release:</p>
<p>Does the package have a final release that supports recent
versions of its dependencies?</p>
</li>
<li><p>Released as an egg on a package index:</p>
<p>Is the add-on packaged as an egg and available on an
easy_install-accessible index such as PyPI? (See the sketch
after this list.)</p>
</li>
</ul>
</div></blockquote>
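<p>The last two checks can even be automated against PyPI’s XML-RPC
interface. A quick, hedged sketch follows; the version heuristic is
crude and the package name is only an example.</p>
<pre>
# A quick sketch of automating the release checks against PyPI's
# XML-RPC interface; the version heuristic is crude and the package
# name is only an example.
import xmlrpclib


def release_status(name):
    client = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')
    releases = client.package_releases(name, True)  # include hidden
    if not releases:
        return 'no releases on PyPI'
    # Treat version strings with pre-release markers as non-final.
    finals = [version for version in releases
              if not any(marker in version
                         for marker in ('a', 'b', 'c', 'rc', 'dev'))]
    if finals:
        return 'final release available: %s' % finals[0]
    return 'pre-releases only: %s' % ', '.join(releases)


print release_status('Products.membrane')
</pre>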
<p>Test coverage should, of course, be criterion #1. Sadly, we don’t have
a good enough test isolation story with ZTC and PTC, so test failures in
the client’s buildout don’t necessarily mean the add-on is broken. I
know there’s been talk about integrating test coverage data into
plone.org/products but it’s not here yet. The holy grail would be
some sort of integration with distutils/setuptools. It would be great
if I could do “python setup.py test --coverage” so that a subsequent
“python setup.py register sdist upload” would incorporate the coverage
data into the release meta-data such that it would be reported on the
PyPI page. Certainly any such solution could be circumvented by the
releaser, but I think that would be very rare; more often it would
encourage maintainers to have good test coverage, and this
would be a huge win overall.</p>
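<p>To make the measuring half concrete, here is a purely hypothetical
sketch using the coverage.py API: run the tests under coverage and
record the total percentage somewhere that release tooling could pick
it up. The release-metadata hook is exactly the part that does not
exist.</p>
<pre>
# Purely hypothetical sketch of the measuring half of the idea; the
# release-metadata hook does not exist. Uses the coverage.py API.
import unittest

import coverage  # coverage.py, assumed installed


def test_coverage_percent(test_module):
    """Run a module's tests under coverage and return the total %."""
    cov = coverage.Coverage()
    cov.start()
    suite = unittest.defaultTestLoader.loadTestsFromName(test_module)
    unittest.TextTestRunner().run(suite)
    cov.stop()
    return cov.report()  # prints a report and returns the total %


if __name__ == '__main__':
    # "mypackage.tests" is a hypothetical dotted test module name.
    percent = test_coverage_percent('mypackage.tests')
    open('coverage-percent.txt', 'w').write('%.1f\n' % percent)
</pre>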
<p>It’s also interesting to note that I find recent version support more
indicative of risk than final release status. This comes from
experience. I’ve worked on too many projects of all sizes that had
production sites running release candidates, betas, alphas, and even
some development versions. One conclusion would be that this means
that the software is of poor quality. I don’t think that’s it. I
think it’s that we, as a software ecosystem, are bad at releasing.
Now eggs are certainly helping with this, but I think we also need to
start an aggressive honor/shame (carrot/stick) campaign about
releasing. Actually, maybe we should have an honor/shame campaign
about add-ons in general. Perhaps a monthly post to Planet Plone and
the mailing list with a Hall of Fame and a Hall of Shame? :)
Something that considers release status (how long has it been since
that beta release? 3 months? Maybe you can release a final
version?), test coverage, and meta-data cleanliness, etc.</p>
<div class="note update admonition">
<p class="admonition-title">Updated on 01 December 2008</p>
<p>Imported from Plone on Mar 15, 2021. The date for this update is the last
modified date in Plone.</p>
</div>
</section>