[ < ] [ > ]   [ << ] [ Up ] [ >> ]         [Top] [Contents] [Index] [ ? ]

12. Regression Testing XEmacs

12.1 How to Regression-Test  
12.2 Modules for Regression Testing  



12.1 How to Regression-Test

The source directory `tests/automated' contains XEmacs' automated test suite. The usual way to run all the tests is to run make check from the top-level build directory.

The test suite is unfinished and still lacks some essential features. It is nevertheless recommended that you run the tests to confirm that XEmacs behaves correctly.

If you want to run a specific test case, you can do it from the command-line like this:

 
$ xemacs -batch -l test-harness.elc -f batch-test-emacs TEST-FILE

If a test fails and you need more information, you can run the test suite interactively by loading `test-harness.el' into a running XEmacs and typing M-x test-emacs-test-file RET <filename> RET. You will see a log of passed and failed tests, which should allow you to investigate the source of the error and ultimately fix the bug. If you are unable to debug it yourself, or don't have time to, please do report the failures using M-x report-emacs-bug or M-x build-report.

Command: test-emacs-test-file file
Runs the tests in file. `test-harness.el' must be loaded. Defines all the macros described in this node, and undefines them when done.

Adding a new test file is trivial: just create a new file here and it will be run. There is no need to byte-compile any of the files in this directory--the test-harness will take care of any necessary byte-compilation.

Look at the existing test files for examples of how to code test cases. It all boils down to your imagination and judicious use of the macros Assert, Check-Error, Check-Error-Message, and Check-Message. Note that all of these macros are defined only for the duration of the test: they do not exist in the global environment.

Macro: Assert expr
Check that expr is non-nil at this point in the test.

Macro: Check-Error expected-error body
Check that execution of body causes expected-error to be signaled. body is a progn-like body, and may contain several expressions. expected-error is a symbol defined as an error by define-error.

Macro: Check-Error-Message expected-error expected-error-regexp body
Check that execution of body causes expected-error to be signaled, and generate a message matching expected-error-regexp. body is a progn-like body, and may contain several expressions. expected-error is a symbol defined as an error by define-error.

Macro: Check-Message expected-message body
Check that execution of body causes expected-message to be generated (using message or a similar function). body is a progn-like body, and may contain several expressions.
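As an illustration, here is a sketch of how these four macros might be used together. The particular forms and expected results are invented for the example (they are not taken from the test suite), and these forms only work inside the test harness, where the macros are defined.

 
;; Assert: the expression should evaluate to non-nil.
(Assert (eq (+ 1 1) 2))

;; Check-Error: (car 5) should signal wrong-type-argument.
(Check-Error wrong-type-argument
  (car 5))

;; Check-Error-Message: the same error, and its message should
;; match the regexp "listp" ("Wrong type argument: listp, 5").
(Check-Error-Message wrong-type-argument "listp"
  (car 5))

;; Check-Message: the body should emit a message matching "done".
(Check-Message "done"
  (message "done"))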

Here's a simple example checking case-sensitive and case-insensitive comparisons from `case-tests.el'.

 
(with-temp-buffer
  (insert "Test Buffer")
  (let ((case-fold-search t))
    (goto-char (point-min))
    (Assert (eq (search-forward "test buffer" nil t) 12))
    (goto-char (point-min))
    (Assert (eq (search-forward "Test buffer" nil t) 12))
    (goto-char (point-min))
    (Assert (eq (search-forward "Test Buffer" nil t) 12))

    (setq case-fold-search nil)
    (goto-char (point-min))
    (Assert (not (search-forward "test buffer" nil t)))
    (goto-char (point-min))
    (Assert (not (search-forward "Test buffer" nil t)))
    (goto-char (point-min))
    (Assert (eq (search-forward "Test Buffer" nil t) 12))))

This example could be saved in a file in `tests/automated', and it would constitute a complete test, automatically executed when you run make check after building XEmacs. More complex tests may require substantial temporary scaffolding to create the environment that elicits the bugs, but the top-level `Makefile' and `test-harness.el' handle the running and collection of results from the Assert, Check-Error, Check-Error-Message, and Check-Message macros.

Don't suppress tests just because they're due to known bugs not yet fixed--use the Known-Bug-Expect-Failure wrapper macro to mark them.

Macro: Known-Bug-Expect-Failure body
Arrange for failing tests in body to generate messages prefixed with "KNOWN BUG:" instead of "FAIL:". body is a progn-like body, and may contain several tests.

A lot of the tests we run push limits; suppress Ebola warning messages with the Ignore-Ebola wrapper macro.

Macro: Ignore-Ebola body
Suppress Ebola warning messages while running tests in body. body is a progn-like body, and may contain several tests.

Both macros are defined temporarily within the test function. Simple examples:

 
;; Apparently Ignore-Ebola is a solution with no problem to address.
;; There are no examples in 21.5, anyway.

;; from regexp-tests.el
(Known-Bug-Expect-Failure
 (Assert (not (string-match "\\b" "")))
 (Assert (not (string-match " \\b" " "))))

In general, you should avoid using functionality from packages in your tests, because you can't be sure that everyone will have the required package. However, if you've got a test that works, by all means add it. Simply wrap the test in an appropriate feature check, arrange for a notice that the test was skipped, and update the skipped-test-reasons hash table. The wrapper macro Skip-Test-Unless is provided to handle common cases.

Variable: skipped-test-reasons
Hash table counting the number of times a particular reason is given for skipping tests. This is only defined within test-emacs-test-file.

Macro: Skip-Test-Unless prerequisite reason description body
prerequisite is usually a feature test (featurep, boundp, fboundp). reason is a string describing the prerequisite; it must be unique because it is used as a hash key in a table of reasons for skipping tests. description describes the tests being skipped, for the test result summary. body is a progn-like body, and may contain several tests.

Skip-Test-Unless is defined temporarily within the test function. Here's an example of usage from `syntax-tests.el':

 
;; Test forward-comment at buffer boundaries
(with-temp-buffer
  ;; try to use exactly what you need: featurep, boundp, fboundp
  (Skip-Test-Unless (fboundp 'c-mode)
                    "c-mode unavailable"
                    "comment and parse-partial-sexp tests"
    ;; and here's the test code
    (c-mode)
    (insert "// comment\n")
    (forward-comment -2)
    (Assert (eq (point) (point-min)))
    (let ((point (point)))
      (insert "/* comment */")
      (goto-char point)
      (forward-comment 2)
      (Assert (eq (point) (point-max)))
      (parse-partial-sexp point (point-max)))))

Skip-Test-Unless is intended for use with features that are normally present in typical configurations. For truly optional features, or tests that apply to one of several alternative implementations (e.g., to GTK widgets, but not Athena, Motif, MS Windows, or Carbon), simply silently suppress the test if the feature is not available.

Here are a few general hints for writing tests.

  1. Include related successful cases. Fixes often break something.

  2. Use the Known-Bug-Expect-Failure macro to mark the cases you know are going to fail. We want to be able to distinguish between regressions and other unexpected failures, and cases that have been (partially) analyzed but not yet repaired.

  3. Mark the bug with the date of report. An "Unfixed since yyyy-mm-dd" gloss for Known-Bug-Expect-Failure is planned to further increase developer embarrassment (== incentive to fix the bug), but until then at least put a comment about the date so we can easily see when it was first reported.

  4. It's a matter of judgement, but you should often prefer generic tests (e.g., eq) to more specific ones (such as = for numbers), even when you know the arguments "should" be of the correct type. This applies whenever the function under test can return a generic object (typically nil) as well as the more specific type it returns on success. We want failures of those assertions reported as "assertion failures" (a null return), not as "other failures" (a wrong-type-argument signal).

    One example is a test of (= (string-match this that) 0), expecting a successful match. Now suppose string-match is broken such that the match fails. Then it returns nil, and = signals "wrong-type-argument, number-char-or-marker-p, nil", generating an "other failure" in the report. But this should be reported as an assertion failure (the test failed in a foreseeable way), rather than as something else (we don't know what happened, because XEmacs is broken in a way we weren't trying to test).
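To make that scenario concrete, here is a sketch of the two styles (the regexp and string are invented for the example). If string-match were broken and returned nil, only the first form would be logged as a clean assertion failure.

 
;; Robust: if string-match returns nil, eq simply yields nil, and
;; the test-harness logs an assertion failure.
(Assert (eq (string-match "bar" "foobar") 3))

;; Fragile: if string-match returns nil, = signals
;; wrong-type-argument, and the test-harness logs an "other failure".
(Assert (= (string-match "bar" "foobar") 3))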



12.2 Modules for Regression Testing

 
`test-harness.el'
`base64-tests.el'
`byte-compiler-tests.el'
`case-tests.el'
`ccl-tests.el'
`c-tests.el'
`database-tests.el'
`extent-tests.el'
`hash-table-tests.el'
`lisp-tests.el'
`md5-tests.el'
`mule-tests.el'
`regexp-tests.el'
`symbol-tests.el'
`syntax-tests.el'
`tag-tests.el'
`weak-tests.el'

`test-harness.el' defines the macros Assert, Check-Error, Check-Error-Message, and Check-Message. The other files are test files, testing various XEmacs facilities. See section 12. Regression Testing XEmacs.



This document was generated by XEmacs Webmaster on August 3, 2012 using texi2html