
Guidelines for Machine Learning Papers in MSSP

In the last few years, MSSP has been receiving a large number of papers relating to
machine learning or ‘soft computing’ applied in a mechanical systems context. Many
of these papers are rejected without review as they do not conform to the standards
required of an MSSP paper. This note is intended to explain what is necessary for a
paper on machine learning to be substantial and original enough for archival
publication in MSSP.

The problem most often encountered is that the papers do not contribute to
engineering knowledge. It is currently quite common for a paper to be submitted
which is based on the analysis of a simple experimental rig (a rotor-bearing rig for
example) using feature extraction and classification. It is not sufficient motivation for
a new publication to base the paper on a ‘new’ feature or a ‘new’ classifier,
particularly if the word ‘new’ only means the algorithm or feature has not been
applied in the precise context before. Papers which appear to be based on this
approach will be rejected without review.

There are a number of acceptable bases for publication:

1. A new application context is proposed or a new experimental data set is
   demonstrated. This presupposes that the data set has some new aspect not
   addressed by existing data, e.g. the Case Western Reserve benchmark data
   for condition monitoring. Even with new data, the subsequent analysis should
   be principled and conform to best practice in the machine learning community.
   The most common reasons for rejection on the latter basis are that algorithm
   hyperparameters are not determined in a principled manner and that
   validation of the algorithm is not rigorous (see the first sketch after this
   list). The most common reason for validation to be questioned is that there
   are not enough training data and therefore generalisation is not confirmed.
2. A new feature for analysis is proposed that appears to be generally applicable
and is superior to existing features in some clear respects. In order for such a
paper to be published, the feature should be benchmarked on more than one
data set and at least one data set should usually be experimental. An
exception may be made in the latter case if simulated data is designed to
include all the anticipated complexities of experimental data; however, this will
need to be verified within the paper. Performance of the new feature must be
established by comparison with existing features that are considered state of
the art; papers will be rejected without review if there is no such comparison
or if classifiers used with the old features are not demonstrably optimised
(see the second sketch after this list).
3. A new classifier or regressor is proposed that appears to be generally
   applicable and is superior to existing classifiers in some clear respects. For
   such a paper to be published, the new algorithm should be benchmarked on
   more than one data set and, again, at least one data set should usually be
   experimental. The new algorithm should be compared in a principled manner
   with an existing state-of-the-art algorithm. 'Principled' here means that all
   algorithm hyperparameters should be optimised as far as possible. A common
   cause for rejection of a paper is that a new classifier, clearly tuned to the
   problem shown, is compared to an existing algorithm whose hyperparameters
   are set in some suboptimal way, so that the fairness of the comparison is not
   evident (see the third sketch after this list).
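To make the validation requirement in item 1 concrete, the following is a
minimal sketch, in Python with scikit-learn, of choosing hyperparameters by
cross-validated search and estimating generalisation with an outer
cross-validation loop. The data, classifier, and parameter grid are placeholder
assumptions, not a prescribed recipe.

    # A minimal sketch of principled hyperparameter selection, assuming
    # scikit-learn; X and y are placeholder stand-ins for real rig data.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))        # placeholder feature matrix
    y = rng.integers(0, 2, size=200)     # placeholder binary labels

    # Inner loop: hyperparameters are chosen by cross-validated grid search,
    # not tuned by hand against the test data.
    model = make_pipeline(StandardScaler(), SVC())
    grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]}
    search = GridSearchCV(model, grid, cv=StratifiedKFold(5))

    # Outer loop: estimate generalisation of the whole selection procedure,
    # so the reported score is not biased by the hyperparameter search.
    scores = cross_val_score(search, X, y, cv=StratifiedKFold(5))
    print(f"nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

The point of the nested structure is that the score reported in a paper should
reflect performance on data that played no part in either training or tuning.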
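For item 2, a feature comparison is only informative if the classifier used
with each feature is itself tuned. The following sketch, under the same
scikit-learn assumption, benchmarks an illustrative candidate feature against
an established one on two placeholder data sets; the feature definitions and
data are stand-ins, not the features any particular paper should use.

    # A minimal sketch of benchmarking a candidate feature against an
    # established one on more than one data set; everything here is a
    # placeholder assumption for illustration.
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.svm import SVC

    def rms(sig, axis=-1):
        # root-mean-square value of each record
        return np.sqrt(np.mean(sig ** 2, axis=axis))

    features = {"rms (established)": rms,
                "kurtosis (candidate)": lambda s, axis=-1: kurtosis(s, axis=axis)}

    rng = np.random.default_rng(1)
    datasets = {f"rig {i}": (rng.normal(size=(120, 1024)), rng.integers(0, 2, 120))
                for i in (1, 2)}       # two placeholder 'experimental' sets

    for dname, (signals, labels) in datasets.items():
        for fname, f in features.items():
            X = f(signals).reshape(-1, 1)   # one scalar feature per record
            # the classifier is tuned for *each* feature, so the comparison
            # reflects the feature rather than an untuned classifier
            search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]},
                                  cv=StratifiedKFold(5))
            search.fit(X, labels)
            print(f"{dname} / {fname}: CV accuracy {search.best_score_:.3f}")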
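Finally, for item 3, fairness requires that the proposed method and the
baseline receive the same tuning effort. The sketch below gives both candidates
an identical cross-validated tuning budget; the two classifiers and their grids
are illustrative stand-ins for a genuinely new method and a state-of-the-art
baseline.

    # A minimal sketch of a fair baseline comparison, assuming scikit-learn.
    # Both candidates are tuned with the same cross-validated protocol.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 8))        # placeholder features
    y = rng.integers(0, 2, size=200)     # placeholder labels

    candidates = {
        "proposed (RF stand-in)": (RandomForestClassifier(random_state=0),
                                   {"n_estimators": [100, 300],
                                    "max_depth": [None, 5]}),
        "baseline (k-NN stand-in)": (KNeighborsClassifier(),
                                     {"n_neighbors": [1, 5, 15]}),
    }

    for name, (est, grid) in candidates.items():
        # identical tuning and evaluation protocol for both methods
        search = GridSearchCV(est, grid, cv=StratifiedKFold(5))
        scores = cross_val_score(search, X, y, cv=StratifiedKFold(5))
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")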

The basic principle to adhere to is that, if a paper based on machine learning is not
principled enough to appear as an applications paper in a machine learning journal,
it is not principled enough for publication in MSSP.

The reason for setting ground rules in this manner is to save the effort of referees;
if it is clear to a handling editor that a paper fails in rigour or novelty in one of the
respects discussed above, the paper will be rejected without review. Although this
may seem harsh, it is also intended to benefit authors, as works which do not satisfy
the conditions for publication will not sit in the review system for long periods before
ultimate rejection.
