Blog

A guide: How to build and publish an NPM Typescript package.
Software
Sep 23, 2022
Introduction

Have you ever found yourself copy-pasting the same bits of code between different projects? This was a constant problem for me, so I started developing TypeScript packages that let me reuse useful pieces of code. This guide will show you, step by step, how to build a package using TypeScript and publish it to the Node Package Manager (npm).

Why use TypeScript?

Using TypeScript will give you a better development experience. It will be your best friend when developing, always "yelling" at you every single time you make a mistake. In the beginning, you may feel that strong typing decreases productivity and isn't worth it. But believe me when I tell you that TypeScript has some serious advantages:

- Optional static typing: types can be added to variables, functions, properties, and so on. This helps the compiler warn about potential errors in code before the package ever runs. Types are also great when using libraries, as they let developers know exactly what type of data is expected (see the sketch below).
- IntelliSense: one of the biggest advantages of TypeScript is its code completion and IntelliSense, providing active hints as code is added.
- More robust code that is easier to maintain.

In my opinion, TypeScript should be your best pal when building packages!
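To make the first point concrete, here is a tiny, hypothetical snippet (not part of the package we are about to build) showing the kind of mistake the compiler catches before the code ever runs:

```ts
// TypeScript flags the second call at compile time, long before runtime.
const add = (a: number, b: number): number => a + b;

add(1, 2);   // OK
add(1, '2'); // Error: Argument of type 'string' is not assignable to parameter of type 'number'.
```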
{  "compilerOptions": {    "outDir": "./lib",    "target": "ES6",    "module": "CommonJS",    "declaration": true,    "noImplicitAny": true  },  "include": ["src"], // which files to compile  "exclude": ["node_modules", "**/__tests__/*"] // which files to skip} There are many configuration options in the fields of tsconfig.json and it's important to be aware of what they do. target: the language used for the compiled output. Compiling to es6 will make our package compatible with browsers. module: the module manager used in the compiled output. declaration: This should be true when building a package. Typescript will then also export types of definitions together with the compiled JavaScript code so the package can be used with both typescript and JavaScript.  outDir: Compiled output will be written to his folder. include: Source files path. In this case src folder. exclude: What we want to exclude from being compiled by the ts Let’s Code! Now with Typescript compilation set up, we are ready to code a simple function that receives parameters and multiplies them, returning the operation result. For this let’s create a src folder in the root and add an index.ts file:  export const Multiplier = (val: number, val2: number) => val * val2; Then add a build script to package.json: "build": "tsc" Now just run the build command in the console:npm run buildThis will compile your Typescript and create a new folder called lib in the root with your compiled code in JavaScript and type definition.It’s needed to add the lib folder to your .gitignore file. Is not recommended for auto-generated files to go to the git remote repository as it can cause unnecessary conflicts. node_modules/lib Formatting and linting A good package should include rules for linting and formatting. This process is important when multiple people are working/contributing to the same project so that everyone is on the same page when it comes to codebase syntax and style. Like we did with Typescript, these are tools used only for the development of the package. They should be added as devDependencies.Let’s start by adding ESLint to our package:npm install eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin --save-dev eslint: ESLint core library @typescript-eslint/parser: a parser that allows ESLint to understand TypeScript code @typescript-eslint/eslint-plugin: plugin with a set of recommended TypeScript rules Similar to Typescript compiler settings, you can either use the command line to generate a configuration file or create it manually in VSCode. Either way, the ESLint configuration file is needed.Create a .eslintrc file in the root:You can use the following starter config and then explore the full list of rules for your ESLint settings. {  "parser": "@typescript-eslint/parser",  "parserOptions": {    "ecmaVersion": 12,    "sourceType": "module"  },  "plugins": ["@typescript-eslint"],  "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended"],  "rules": {},  "env": {    "browser": true,    "es2021": true  },} parser: this tells EsLint to run the code through a parse when analyzing the code. plugins: define the plugins you’re using extends: tells ESLint what configuration is set to extend from. The order matters. env: which environments your code will run in Now let’s add a lint script to the package.json. Adding an --ext flag will specify which extensions the lint will have in account. By default it’s .js but we will also use .ts. "lint": "eslint --ignore-path .eslintignore --ext .js,.ts ." 
Formatting and linting

A good package should include rules for linting and formatting. This is important when multiple people are working on or contributing to the same project, so that everyone is on the same page when it comes to codebase syntax and style. Like TypeScript, these tools are used only for the development of the package, so they should be added as devDependencies.

Let's start by adding ESLint to our package:

```
npm install eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin --save-dev
```

- eslint: the ESLint core library
- @typescript-eslint/parser: a parser that allows ESLint to understand TypeScript code
- @typescript-eslint/eslint-plugin: a plugin with a set of recommended TypeScript rules

Similar to the TypeScript compiler settings, you can either use the command line to generate a configuration file or create it manually in VS Code. Either way, an ESLint configuration file is needed.

Create a .eslintrc file in the root. You can use the following starter config and then explore the full list of rules to tune your ESLint settings:

```json
{
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 12,
    "sourceType": "module"
  },
  "plugins": ["@typescript-eslint"],
  "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
  "rules": {},
  "env": {
    "browser": true,
    "es2021": true
  }
}
```

- parser: tells ESLint which parser to run the code through when analyzing it.
- plugins: defines the plugins you're using.
- extends: tells ESLint which configurations to extend from. The order matters.
- env: which environments your code will run in.

Now let's add a lint script to package.json. The --ext flag specifies which file extensions the linter will take into account. By default it's only .js, but we also want .ts:

```
"lint": "eslint --ignore-path .eslintignore --ext .js,.ts ."
```

Some files don't need to be linted, such as the lib folder. It's possible to prevent linting of unnecessary files and folders by creating a .eslintignore file:

```
node_modules
lib
```

Now ESLint is up and ready! I suggest integrating ESLint into whatever code editor you prefer to use. In VS Code, go to extensions and install the ESLint extension.

To check your code using ESLint, you can manually run your script in the command line:

```
npm run lint
```

Now let's set up Prettier

It's common to use ESLint and Prettier at the same time, so let's add Prettier to our project:

```
npm install --save-dev prettier
```

Prettier doesn't need a config file; you can simply run and use it straight away. In case you want to set your own config, you need to create a .prettierrc at the root of your project. If you are curious to know more, there is a full list of format options and the Prettier Playground.

```json
// .prettierrc
{
  "semi": false,
  "singleQuote": true,
  "arrowParens": "avoid"
}
```

Let's add the Prettier command to our scripts. Let's also support all files that end in .ts, .js, and .json, and ignore the same files and directories as .gitignore (or create a .prettierignore file):

```json
...
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "build": "tsc",
  "lint": "eslint --ignore-path .eslintignore --ext .js,.ts .",
  "format": "prettier --ignore-path .gitignore --write \"**/*.+(js|ts|json)\""
},
...
```

Now just run the command npm run format to format and fix all your code.

Conflicts between ESLint and Prettier

Prettier and ESLint can generate issues when common rules overlap. The best solution here is to use eslint-config-prettier to disable all ESLint rules that are irrelevant to code formatting, since Prettier is already good at that:

```
npm install --save-dev eslint-config-prettier
```

To make it work, go to the .eslintrc file and add prettier at the end of your extends list, so it disables any conflicting rules from the previous plugins:

```json
// .eslintrc
{
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 12,
    "sourceType": "module"
  },
  "plugins": ["@typescript-eslint"],
  "extends": ["eslint:recommended", "plugin:@typescript-eslint/recommended", "prettier"],
  "rules": {},
  "env": {
    "browser": true,
    "es2021": true
  }
}
```

With that, the formatting and linting section is completed! Awesome job!

Set up testing with Jest

In my opinion, every package should include unit tests! Let's add Jest to help us with that. Since we are using TypeScript, we also need to add ts-jest and @types/jest:

```
npm install --save-dev jest ts-jest @types/jest
```

Create a file jestconfig.json in the root:

```json
// jestconfig.json
{
  "transform": { "^.+\\.(t|j)sx?$": "ts-jest" },
  "testRegex": "(/__tests__/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$",
  "moduleFileExtensions": ["ts", "tsx", "js", "jsx", "json", "node"]
}
```

Now let's update the old test script in our package.json file:

```json
// package.json
"scripts": {
  ...
  "test": "jest --config jestconfig.json",
  ...
}
```

Let's write a basic test! In the src folder, add a new folder named __tests__, and inside it add a file with any name you like, as long as it ends with test.ts, for example multiplier.test.ts:

```ts
// multiplier.test.ts
import { Multiplier } from '../index'

test('Test Multiplier function', () => {
  expect(Multiplier(2, 3)).toBe(6)
})
```

In this simple test, we pass the numbers 2 and 3 as parameters through our Multiplier function and expect the result to be 6. Now just run it:

```
npm test
```

It works! Nice job! The test passes successfully, meaning that our function is multiplying correctly.
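If you want to cover a bit more ground, a couple of extra cases could be added to the same file. This is a hypothetical extension, not part of the original guide:

```ts
// src/__tests__/multiplier.test.ts (additional, hypothetical cases)
import { Multiplier } from '../index'

test('multiplying by zero returns zero', () => {
  expect(Multiplier(7, 0)).toBe(0)
})

test('handles negative numbers', () => {
  expect(Multiplier(-2, 3)).toBe(-6)
})
```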
package.json magic scripts

There are many "magic" scripts available in the Node Package Manager ecosystem, and it's good to automate our package as much as possible. In this section we will look at some of these npm scripts: prepare, prepublishOnly, preversion, version, and postversion.

prepare: runs when git dependencies are being installed. This script runs after prepublish and before prepublishOnly. Perfect for building the code.

```
"prepare": "npm run build"
```

prepublishOnly: serves the same purpose as prepublish and prepare, but runs only on npm publish!

```
"prepublishOnly": "npm test && npm run lint"
```

preversion: runs before bumping a new package version. Perfect for checking the code with linters.

```
"preversion": "npm run lint"
```

version: runs after a new version has been bumped. If your package has a git repository, like in our case, a commit and a new version tag will be made every time you bump the version. This command runs BEFORE the commit is made.

```
"version": "npm run format && git add -A src"
```

postversion: runs after the commit has been made. A perfect place for pushing the commit as well as the tag.

```
"postversion": "git push && git push --tags"
```

Before publishing

When we added .gitignore to our project, the objective was to keep build files out of our git repository. For the published package, the opposite applies: we don't want the source code to be published with the package, only the build files. This can be fixed by adding the files property to package.json:

```json
// package.json
...
"files": [
  "lib/**/*"
],
...
```

Now only the lib folder will be included in the published package!

Final details on package.json

Finally, it's time to prepare our package.json before publishing the package:

```json
// package.json
{
  "name": "npm-package-guide",
  "version": "1.0.0",
  "description": "A simple multiplier function",
  "main": "lib/index.js",
  "types": "lib/index.d.ts",
  "scripts": {
    "test": "jest --config jestconfig.json",
    "build": "tsc",
    "lint": "eslint --ignore-path .eslintignore --ext .js,.ts .",
    "format": "prettier --ignore-path .gitignore --write \"**/*.+(js|ts|json)\"",
    "prepare": "npm run build",
    "prepublishOnly": "npm test && npm run lint",
    "preversion": "npm run lint",
    "version": "npm run format && git add -A src",
    "postversion": "git push && git push --tags"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/Rutraz/npm-package-guide.git"
  },
  "keywords": ["npm", "jest", "typescript"],
  "author": "João Santos",
  "license": "ISC",
  ...
```

These final touches include adding a nice description, keywords, and an author. The main and types entries are especially important, since they tell npm consumers where to find the compiled code and its type definitions.

Commit and push your code

The time has come to push all your work to your remote repository!

```
git add -A && git commit -m "First commit"
git push
```

Publish your package to npm!

To be able to publish your package, you need an npm account. If you don't have one, you can sign up at https://www.npmjs.com/signup. Run npm login to log into your npm account. Then all you need to do is publish your package with the command:

```
npm publish
```

If all went smoothly, you can now view your package at https://www.npmjs.com/. We got a package!
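From the consumer's side, using the published package would look something like this (a hypothetical example, using the package name from this guide):

```
npm install npm-package-guide
```

```ts
// Works from both TypeScript and JavaScript, with type definitions included.
import { Multiplier } from 'npm-package-guide'

console.log(Multiplier(4, 5)) // 20
```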
Awesome work!

Increase to a new version

Let's increase the version of our package using our scripts:

```
npm version patch
```

This will create a new tag in git and push it to our remote repository. Now just publish again:

```
npm publish
```

With that, you have a new version of your package!

Congratulations! You made it to the end. Hopefully, you now know how to start building your awesome package!
João Santos
Mobile Developer Lead
Hey, BANKS! Do you really need Another Core Banking System?
Software
Oct 12, 2019
Our minds are formatted to assume, without questioning, that technological systems involving complex and critical operations are always tightly crafted in a secret lab and managed by pseudo-scientists from an 80s Hollywood movie.

"Complexity is the enemy of execution."

As a result of this common vision, in the most traditional business areas of our society it is very uncommon to ask: "why do we need such complexity?"

After several years working on complex software architectures for financial institutions, and since 2017 as a fintech and software company founder, I truly believe that traditional banks are the best example of the famous Tony Robbins quote about complexity: "complexity is the enemy of execution". The main reason for this conclusion has been identified and is well known to all financial services professionals: the dependency on legacy systems.

My goal when I decided to write this article was not to talk about the problems of obsolete, slow or insecure legacy systems. My goal was to share my disturbing conclusion that those responsible for the technological evolution strategy of these banks are repeating the same old mistakes when trying to remove their current legacy systems from the equation.

"Dependence on a single component, usually coming from the 80s and 90s, that we call the Core Banking System."

What is causing most of these organizations to be doomed to disappear in a shorter period of time than we could ever imagine? The answer is their dependence on a single component, usually coming from the 80s and 90s, that we call the Core Banking System.

Most Core Banking Systems were developed to offer all the functionality needed to run the bank. And WOW... this was great for those who wanted to have a bank up and running quickly! In the 80s and early 90s there was no internet for everyone, no fintech companies disrupting the business, and most banking products were similar between banks.

The real problems of having an old Core Banking System start when the business processes of your organization slow down every time you need small changes to the system: when you need to support channel solutions in the digital banking race against other banks, when regulatory changes are mandatory, or when a very specific customisation is urgently needed to support a new portfolio product.

The Music Lover vs The Music Professional Analogy

Recently I found the perfect analogy for this whole Core Banking modernization process: the music lover vs the music professional.

Let me explain my analogy. If you like to listen to music at home and you don't need sound equipment to make a living, buy a hi-fi, or just use your parents' hi-fi from the 90s. If you need to fix something on the old hi-fi, it will be hard to find a good, inexpensive technician, and some of the parts can also be hard to find. But the point is that you are not a professional: you like to listen to some chill music at the weekend, and this old hi-fi does what you need! This is what most traditional banks that are not investing in innovation are doing; they are simply listening to some chill music and unconsciously waiting for their end.
But if you're a professional musician or a DJ, you have competition and you need to keep your system updated, if possible a state-of-the-art system, so you can compete in the market with other musicians. In this case, you will not buy a "domestic" hi-fi. You will buy several separate professional sound hardware modules and high-quality wires, all of them with standard interfaces, so you can quickly upgrade or simply change sound parameters to better respond and adapt to the space where you are playing your music. This type of system is harder to assemble, but once it is ready it is much easier to adapt to any situation.

This professional setup is, in my opinion, the perfect analogy for a robust and highly customisable integration layer in a modern bank, where we can connect dozens or hundreds of external modules, and orchestrate and fine-tune the final result.

When a bank with this modern view of systems architecture acquires or develops a new module to integrate into its microservices ecosystem, it already knows that the module will be easy to replace in the near future, because it was built with integration and domain isolation in mind. That is the normal life cycle of technological modules in today's super-fast financial industry.

In the end, it's about being a professional musician or just choosing to be someone who likes music and owns a hi-fi box; after all, it's all about the end result.

Why are banks replacing old monolithic systems with new monolithic systems?

One of the most difficult questions for me to answer today is why banks are replacing old monolithic systems with new monolithic systems, even though the new systems are built using more modern tools, less proprietary hardware architectures, or even run in the cloud.

Just because a vendor shows you some example integration services, like web service APIs, and you find separate modules in the product catalogue, it doesn't mean that you are not buying another monolith!

In the last eight years, I have found it more and more common to see banks replacing their Core Banking System with a new, modern Core Banking System. The reason is that most banks don't have a real engineer on their board, or they have engineers with a narrow view of systems architecture, or, in most cases, the decisions are made without the feedback of the engineering team.

For decision-makers with a huge ego, afraid of technological evolution, the "safest" choice when it comes to making big investments in technology transformation is always to buy expensive, proprietary systems from big-name vendors. This is normally the result of one or more reports made for them by one of the big five consulting companies, which have a huge commercial interest in maintaining proprietary, closed and, above all, hard-to-adapt systems, so they can keep selling obscenely expensive contracts for development, analysis, consulting, and so on.

Decision-makers should instead focus on the real problem, which is the lack of freedom to manage and rule their own technological path. That freedom is the basis of the fast pace and low maintenance costs of almost all fintech companies.

"The soul of the monolith lives on that dark side where the vendor interconnects, on the same code base, all the business rules of different domain areas."

The soul of the monolithic system lives on that dark side where the system vendor interconnects, on the same code base, all the business rules of different domain areas, like a "big spaghetti ball from hell".
Following this approach, vendors can say that the new Core Banking System offers thousands of integration services (web services, queues, ...), but all the old problems will remain. They are just selling a new monolith, supported by modern programming languages and installed on standard hardware infrastructure or in the cloud.

The secret to success in this modernization process is to free your organization from the dependency on one specific vendor, where you will always need them, or the expensive consultancy companies that work with their product, for every analysis, change or task your organization needs just to run its systems.

"Microservices architecture is the path to modernization and to speeding up all business change and evolution inside a bank."

All systems inside the bank should respect industry standards for input and output data structures and isolate small business or integration domain functions in separate software modules. Microservices architecture is the path to modernization and also the basis for all business change and evolution processes inside a bank.

My advice to decision-makers working in the financial industry is to understand that the most important asset inside their organization is data. They should not depend on one specific group of providers to manage and leverage the power of that data, so the organization can keep generating business and market opportunities.

"Systems architecture is the basis of an organization's flexibility to adapt quickly to market needs."

Those decision-makers who don't yet realise that investment in technology doesn't have to mean investment in specific big-name vendors, and that systems architecture is the basis of their organization's flexibility to adapt quickly to market needs, will be responsible for the loss of customers and, in many cases, for the end of the organization. In those cases, the "safe" bet on big-name vendors will be worth nothing in their defence.
Pedro Camacho
CEO & Co-founder
Azure Active Directory — Authentication OAuth 2.0
Software
Jan 08, 2018
I've been working in the last few weeks on an integration service for a complex system based on Azure. I was trying to find a way to authenticate against Azure Active Directory, basically getting the access token for future requests to the system without the Microsoft login popup window that is so common in similar cases, like integrations with Facebook or other services.

Microsoft Azure Active Directory (AD) already has an authentication library (ADAL), but unfortunately nothing for the language I was using at the time, GoLang. Faced with this situation, I was forced to find a solution.

. . .

OAuth 2.0

OAuth 2 is a protocol for authorization that enables applications to obtain limited access to users' accounts on an HTTP service. I will not explain the whole protocol here (you can check it here), just the authorization grant types.

OAuth 2 has four grant types:

- Password
- Client credentials
- Implicit
- Authorization code

With this information, and to solve my problem, I chose the Password grant. For similar scenarios, when you have trusted first-party or third-party clients, both on the web and in native applications, this offers the final user the best experience. For more information about OAuth 2.0 you can read here.

. . .

Microsoft Azure Active Directory and OAuth 2

At this point I started to look at how to use the Password grant type in Azure AD, and the documentation from Microsoft is not very helpful. It only focuses on the other grant flows used in different scenarios, for example:

- Authorization Code for web server applications
- Implicit Grant for native applications
- Client Credentials for service applications

But the Resource Owner Password Credentials grant type has also been supported since version 1.1 of Azure AD. It is likewise based on HTTP requests, but without URL redirection; for more information about this flow you can read here.

So for this specific case, where we have an integration service (e.g. a Windows service) getting information from a trusted target application, this is the best option.

. . .

How to use it

To get the token from Azure AD with OAuth 2, we need to make the following web service request:

```
POST https://login.microsoftonline.com/<TenantId>/oauth2/token
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
```

- TenantId: <MY_HOST> (for example "mywebsite.com")
- WS: /oauth2/token

Parameters to use in the request body:

- grant_type: password
- client_id: the Client Id value from Azure AD
- resource: the app id of the application you want an access token for
- client_secret: the Client Secret value from Azure AD
- username: the user name of a user account in the Azure AD instance
- password: the password of that user account

Request result:

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "expires_in": "600",
  "expires_on": "1511533698",
  "not_before": "1511533698",
  "resource": "*resource*",
  "access_token": "*token*",
  "refresh_token": "*token*",
  "scope": "user_impersonation"
}
```

Finally, you have your token to use in your application.

Have you ever had a similar need, or another approach? Please let me know. I hope this information will be useful for any future development.

@medium
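P.S. To make the request above concrete, here is a rough sketch of the token call in TypeScript (the original integration was written in Go; every identifier below, such as TENANT_ID, is a placeholder to replace with your own values):

```ts
// Hypothetical sketch of the password-grant token request described above.
// Assumes a runtime with the Fetch API available (Node 18+ or a browser).
const TENANT_ID = '<TenantId>' // e.g. "mywebsite.com"

async function getAccessToken(): Promise<string> {
  const body = new URLSearchParams({
    grant_type: 'password',
    client_id: '<ClientId>',         // the Client Id value from Azure AD
    resource: '<ResourceAppId>',     // the app id of the target application
    client_secret: '<ClientSecret>', // the Client Secret value from Azure AD
    username: '<username>',          // a user account in the Azure AD instance
    password: '<password>',          // that user's password
  })

  const response = await fetch(
    `https://login.microsoftonline.com/${TENANT_ID}/oauth2/token`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body,
    }
  )

  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`)
  }

  const json = await response.json()
  return json.access_token as string // Bearer token for subsequent requests
}
```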
João Marçal
Digital Products Development Manager
