22 Comments

  1. Ajay

    This looks very interesting. It would be good if it also checked against WPCS.

    • Luke Carbis

      It does! The coding standards Tide uses are WordPress-Core, WordPress-Docs, and WordPress-Extra from the PHPCS WPCS project.

    • Rheinard Korf

      @Ajay, the source for the sniffs is indeed WPCS.

      Currently it’s using predominantly the WordPress-Core sniffs from the project, along with the relevant sniffs from WordPress-VIP that pertain to performance. The team carefully reviewed the selected sniffs and categorised them into “security”, “performance” and “standards”, so some sniffs have a higher weighting than others.

      As mentioned in the article, the weighting document will be made available so that the community can decide what it should be. Ours is just a first draft.

      The WPCS project is really the source of inspiration for Tide.

    • Derek Herman

      WPCS is exactly what Tide is testing code against, plus PHP Compatibility for themes/plugins and Lighthouse audits for themes.

  2. Norris

    This is amazing!
    I think I’ll be first in line to test drive this as soon as it becomes available!

    I’ve been playing around with grading themes at Themetally with Lighthouse (without deploying anything to production yet), but this is going to take it to a whole new level. I can’t wait! 🤓

  3. AJ

    I have to concur that this is an interesting concept. I’ve already tweeted this to you, Sarah, but I’ll see what others think: if the stars are implemented as in the screenshot, use green stars instead of grey ones. I think people would respond better to that colour.

  4. Rhys Clay

    I think this would be a good outcome for everyone – plugin developers and consumers.

  5. mark k.

    WPCS is nice, but very far from being flawless. For some things you need to uglify your code in order to reduce the noise coming from it, which is not a great thing. If, for example, you want your code to be easily testable by having functions return HTML instead of outputting it, good luck reducing the number of “non-escaped output” warnings.
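
    To make that trade-off concrete, here is a minimal, hypothetical PHP sketch (the function name is invented) of the pattern in question, using standard WordPress escaping functions; the sniffer only sees the final echo, so it cannot tell that the value was already escaped inside the getter.

      <?php
      /*
       * Illustration only: returning markup keeps the function easy to unit
       * test, but the escaped-output sniff flags the echo of the return value
       * because it cannot see the escaping that already happened inside.
       */
      function myplugin_get_greeting( $name ) {
          // Escaping happens here, close to the data being interpolated.
          return '<p class="greeting">' . esc_html( $name ) . '</p>';
      }

      // This line is still reported as unescaped output. Common workarounds:
      // wrap it in wp_kses_post() as below, declare the getter as a custom
      // escaping function in the ruleset, or add a phpcs:ignore annotation.
      echo wp_kses_post( myplugin_get_greeting( get_the_author() ) );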

    Coding standards should be a convenience for the developer, helping them read their old code and search through it. They are not by themselves a good measure of code quality; you can write totally insecure code that obeys every coding standard.

    While it might be a good thing to nudge WordPress developers into using some mostly common coding standards, I am afraid this information will just create a false sense of security for users, and there will obviously be false warnings as well. If this information influences search results, or anything else, all you will end up with is better written, more maintainable malware.

    I assume that applying such a tool to the repository would force Otto to kick his “php in widgets” plugin out of it, which, lol, is unlikely to happen, so we will end up again with the tired old discussion about arbitrary decisions in the repository :(

    • Rheinard Korf

      I agree. But the purpose of Tide is to check against the standards of the ecosystem we work with. We deliberately chose not to compromise on some sniffs that I personally find “silly”, to put it mildly; nevertheless, the standard was determined by the community and is shaped by the community.

      Once Tide launches, the best we can all do to improve the service is to contribute to the WPCS project. Help shape the PHP sniffs, help out with the ESLint projects @netweb is working on, etc.

      • mark k.

        The problem is not with the tool; the problem is with how it is being marketed (at least here). Since it will be very hard for any non-trivial plugin to score a 10 (any plugin/theme which generates inline CSS or JS, for example), malware will be able to safely score a 9 and be considered “perfect”, but it takes only one soft spot in the code that can be abused to make it a horrible plugin. The same goes for performance.

        Security and performance have to be audited by humans who can understand the context and execution path of the code. It is nice to have a helper tool, but if it were possible to trust such tools, we would have heard of them in other contexts as well. WordPress.com VIP, for example, does a manual inspection and does not trust the tool “as is”.

        Targeting such tools, and their results, toward users who do not have knowledge of the possible pitfalls is just very wrong.

    • Otto

      I assume that applying such a tool to the repository would force Otto to kick his “php in widgets” plugin out of it,

      Nope. It serves a purpose, but I still tell people how to not use it. I’m perfectly fine with this idea.

  6. Daniel Powney

    I like the idea of checking coding standards for plugins, but some developers and organisations may choose to create their own. And I would not always want to trust the tool. Perhaps this measure should only be visible to the plugin developers.

  7. Andreas Nurbo

    Coding standards are not the same as performance or security. Also, the WP default standard scores lower on readability tests, so there’s that as well. Make your code harder to read: go the WP way. Not sure what can be done about that now, though.

  8. Jeroen Sormani

    Looks like an interesting project!
    Not sure yet how I feel about star ratings as output; it would depend on how accurate they are and what the exact factors are (I don’t think you should get half a star less because someone used spaces instead of tabs).

    Something to consider too: a textual output along the lines of:
    – With risk (red; I try not to say something too negative like ‘bad’)
    – Unconventional (yellow/orangey)
    – Conventional (green)

    @Derek Herman, @Rheinard Korf, @Luke Carbis, it might be interesting for us to chat to see if http://codeoversight.com/ can bring any value here. Feel free to reach out to me if you think it’s worth a chat.

  9. Pete

    Has some merit, especially the compatibility checking (as it is black and white). Performance is also probably a good metric. And security audits will be useful.

    But there are so many other factors that will be ignored or are subjective, so this scoring could be counterproductive. It is important to remember that a well-coded plugin is not the same as a good plugin.

    • Rheinard Korf

      Totally agree. Well-written code is not the same as good code, but it’s a start, right? It helps developers not leave gaping holes.

      On the other hand, because Tide is an API and will be open sourced, it leaves room for different kinds of audits to be run on code that we haven’t even thought of. My personal vision is that Tide would eventually allow others to build their own plugins that use the API to surface specific metrics as determined by that plugin. You could even make a subjective review plugin if you feel so inclined.
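
      A purely hypothetical sketch of the kind of consumer plugin described above; the endpoint URL and JSON fields are invented for illustration and are not Tide’s actual API:

        <?php
        /*
         * Hypothetical example: the URL and response fields below are made up
         * to show the shape of a plugin that pulls audit results over HTTP and
         * surfaces only the metric it cares about.
         */
        function myplugin_get_phpcs_error_count( $plugin_slug ) {
            $response = wp_remote_get( 'https://api.example.com/audit/' . rawurlencode( $plugin_slug ) );

            if ( is_wp_error( $response ) ) {
                return null; // Network failure: show nothing rather than a wrong score.
            }

            $data = json_decode( wp_remote_retrieve_body( $response ), true );

            return isset( $data['phpcs']['error_count'] ) ? (int) $data['phpcs']['error_count'] : null;
        }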

  10. Justin Tadlock

    The code sniffer is a good tool for developers in helping them write better code. It’s not meant for users.

    After using the PHPCS WPCS project for months to review code, I can say with absolute certainty that there are far too many false-positives given. That’s not a problem in terms of using the tool. It simply means, “Hey, you need to check that this is OK; it may or may not be an issue.” It doesn’t necessarily mean, “Hey, this is broken.”

    The code sniffer results should never be used as a metric for end users. Certainly not based on a 5-star rating system. It’d be far too easy to write a quality, secure 1-star plugin that gets matched up against an insecure 5-star plugin.

    I can’t speak to the other two scans mentioned. I’m only familiar with PHPCS WPCS.

    • Rheinard Korf

      That’s why we are releasing the scoring metrics for contribution. Not all sniffs are equal… some should perhaps only count once… “So you did this thing, we recommend you fix it, but we’re not gonna sting you for each time we see it in the code”.

      We do not want sniffs like “Yoda” conditions, for example, to give a really good plugin a terrible score.
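
      Purely to illustrate that kind of per-sniff weighting (the sniff codes, weights, and “count once” flags below are invented examples, not Tide’s published metrics), a weighted penalty might be computed roughly like this:

        <?php
        // Illustration only: weights and flags are invented, not Tide's actual scoring.
        $weights = array(
            'WordPress.Security.EscapeOutput' => array( 'weight' => 5.0, 'count_once' => false ),
            'WordPress.PHP.YodaConditions'    => array( 'weight' => 0.1, 'count_once' => true ),
        );

        function example_weighted_penalty( array $violations, array $weights ) {
            $penalty = 0.0;

            foreach ( $violations as $sniff => $occurrences ) {
                if ( ! isset( $weights[ $sniff ] ) ) {
                    continue; // Unlisted sniffs do not affect the score in this sketch.
                }

                $rule  = $weights[ $sniff ];
                $count = $rule['count_once'] ? min( 1, $occurrences ) : $occurrences;

                $penalty += $rule['weight'] * $count;
            }

            return $penalty;
        }

        // e.g. example_weighted_penalty( array( 'WordPress.PHP.YodaConditions' => 12 ), $weights ) returns 0.1:
        // twelve Yoda-condition hits count once, so they barely dent the score.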

      Also, the star rating is a concept from the design team. It could be badges, it could be broken down into categories. We’re asking for feedback, so this is helpful.

      • Rick Gregory

        I don’t think this addresses the real issue – how to relate any code sniffing effort to plugin quality for a non-technical audience.

        Presuming that plugins with serious security issues or with malware aren’t on the repo, then we’re left with things that are going to be hard to express in a summary rating whether that’s stars, a 1-10 score or whatever. Can we reliably judge, say, the performance impact of a plugin? What if that impact is fine on low traffic sites but starts to compromise a site that has significant traffic? What if the code is high quality but it puts obnoxiously colored promos on its settings page?

        I just don’t see a way to sniff most code issues and relate them to the things that non-technical WP users care about, which are mostly: 1) does this do what I need? 2) does it have any security problems? 3) does it slow down my site?

  11. WP Compass

    I like and share your vision of a performant, secure and reliable “open web”. Thus every new tool/method that helps to get closer to this goal and to stick to best practices is a benefit for the WordPress community.

    However, I’m not sure if showing the “tide score” publicly in the item description should be the way to go (false positives, etc.).

    It might be a useful tool to help plugin and theme authors to enhance their code before submission. An important part will be to provide detailed information about spotted “problems” and maybe even possible solutions.

    • fwolf

      Personally I think it’s kind of a wasted effort. The really nasty code mostly does not sit inside the public WP plugin repository (although there are certainly quite a lot of examples of “code crime against humankind”), but in major commercial marketplaces like ThemeFactory.

      Getting the management of THOSE places on board would be of much more use; it would give us, i.e. the developers and designers who work with WP on a daily basis, a much better tool: an indicator of how crappy or tidy the code base of a specific theme or plugin ACTUALLY is. And it would thus save hundreds of thousands of hours that would normally go into building workarounds for ugly code, short-sighted closed-off APIs, and so on.

      cu, w0lf.

  12. Matt

    This is a fantastic initiative. It is needed and will get better with more tools added.

    I think having unit testing and code coverage included over time would be a huge bonus. Anyone doing continuous integration on a plugin will generally have a much more robust product.

    HUGE applause.

Comments are closed.
