New Feature — Ability to Compare the Results of Any Two Code Analysis Scans

ShiftLeft Next Generation Static Code Analysis now allows you to compare any two versions of your code scans. Using the compare scans & trends feature, it's easy to determine what has changed between two versions.

Photograph by Chris Ried on Unsplash

Why it matters

Developers are always writing code, implementing fixes within a specific context. Their code shouldn't break existing unit tests or integration tests, and it shouldn't introduce new bugs.

They expect the same from their code analysis tools — tell them whether they have

  1. Introduced a new vulnerability [NEW]
  2. Reintroduced an already fixed vulnerability [REGRESSION]

and whether they can get a list of all vulnerabilities that were found in a previous build and resolved as an outcome of their current work [FIXED].

ShiftLeft NG-SAST's latest "compare scans" feature achieves all of these use cases and much more.

How the feature works

Prerequisite: The selected application should have more than one scan.

For any application with multiple scans, the application summary view will default to the most recent version. In the new "Findings Trends" section, you will be able to select another version to compare it to.

Note: You can only compare a version to a previous version. Only versions performed before the target version will be displayed.

The "Findings Trends" section will give you a summary of the differences between the versions. You'll see:

Select a comparison version from the dropdown

  • Total Findings — The total number of new, common, and regression findings in the current version.
  • New — Findings that are in the current version, but not in the selected comparison version.
  • Common — Findings that are in both the current version and the selected comparison version.
  • Regressions — Findings that are in the current version that were fixed in a version prior to the selected comparison version.
  • Fixed — Findings that are not in the current version but were present in the selected comparison version.

Note: Findings include vulnerabilities, insights, and secrets.
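The classification above boils down to set operations over finding identifiers. The sketch below illustrates the logic with hypothetical finding IDs — this is not ShiftLeft's actual data model, just a minimal illustration of how the four buckets relate:

```python
# Hypothetical finding IDs per scan version (illustrative only).
current = {"F1", "F2", "F5"}          # findings in the current version
comparison = {"F1", "F3"}             # findings in the selected comparison version
fixed_before_comparison = {"F5"}      # findings fixed in a version prior to the comparison

new = current - comparison - fixed_before_comparison   # in current only, never fixed before
common = current & comparison                           # present in both versions
regressions = current & fixed_before_comparison         # previously fixed, now back
fixed = comparison - current                            # in comparison, gone from current

print(sorted(new))          # ['F2']
print(sorted(common))       # ['F1']
print(sorted(regressions))  # ['F5']
print(sorted(fixed))        # ['F3']
```

Note that "Total Findings" (new + common + regressions) equals the number of findings in the current version, which is why the summary counts always add up.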

Compare Scans View

Clicking on a summary total — or clicking on the "Compare Scans" icon in the sidebar — will open a detailed view of the findings. From here you can swap the current and selected comparison versions and perform the same tasks you are accustomed to in the other views: View Details, Assign, Fix, Ignore.

Perform actions on the comparison results

API Update

This release also updates the ShiftLeft API to allow API users to pass source and diff query parameters to the List App Findings endpoint.

  • source — the version to compare the current version to (e.g. scan.3)
  • diff — the finding status to return (e.g. fixed, new, etc.)
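As a rough sketch of how these parameters might be passed, the snippet below builds a request URL. The base path, path placeholders, and any authentication details are assumptions for illustration — consult the ShiftLeft API documentation for the actual List App Findings endpoint:

```python
from urllib.parse import urlencode

# Hypothetical endpoint path; the real path and auth scheme are defined in
# the ShiftLeft API docs. Placeholders are left unfilled intentionally.
base = "https://api.shiftleft.io/orgs/{org_id}/apps/{app_name}/findings"

# source: the version to compare the current version to
# diff:   which comparison bucket to return (e.g. "fixed", "new")
params = urlencode({"source": "scan.3", "diff": "fixed"})

url = f"{base}?{params}"
print(url)
```

Requesting `diff=fixed` against `source=scan.3` would return the findings present in scan.3 but absent from the current version, matching the "Fixed" bucket described above.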

ShiftLeft is thrilled to provide developers and organizations with another great feature to help with application security! For a free trial of ShiftLeft Next Generation Code Analysis, click here.



New feature — Ability to compare any two code analysis scans was originally published in ShiftLeft Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

*** This is a Security Bloggers Network syndicated blog from ShiftLeft Blog – Medium authored by Vincent Falcone. Read the original post at: https://blog.shiftleft.io/new-feature-ability-to-compare-any-two-code-analysis-scans-58f3a1e613ac?source=rss----86a4f941c7da---4
