Friday, December 23, 2022

Best Practices in React Development

Photo by Lautaro Andreani on Unsplash

1.    Folder Structure

Use a component-centric file structure, which means that all the files related to a single component (tests, CSS, JavaScript, assets, etc.) are kept under one folder.

For example:

Components
  |
  --Login
    |
    --__tests__
        --Login.test.js
    --Login.jsx
    --Login.scss
    --LoginAPI.js

 

2.    HOC

A higher-order component (HOC) is an advanced React technique for reusing component logic. An HOC is a function that takes a component and returns a new, enhanced component. For example, you might need to show certain components only when the user is logged in. Instead of repeating the same check in every component, you can keep the logic in one higher-order component and wrap the other components with it.
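A minimal sketch of such an HOC (withAuth and the authToken check are illustrative, not a specific library API):

import React from 'react';

// A hypothetical HOC: renders the wrapped component only when the user is logged in.
function withAuth(WrappedComponent) {
  return function AuthenticatedComponent(props) {
    const isLoggedIn = Boolean(localStorage.getItem('authToken')); // illustrative check
    if (!isLoggedIn) {
      return <p>Please log in to continue.</p>;
    }
    return <WrappedComponent {...props} />;
  };
}

// Usage: const ProtectedDashboard = withAuth(Dashboard);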

 

3.    Decompose Big Component

Try to decompose large components into smaller ones so that each component performs one function as far as possible. Small components are easier to manage, test, and reuse.
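For example (a hypothetical list component split into smaller pieces):

const UserItem = ({ user }) => <li>{user.name}</li>;

const UserList = ({ users }) => (
  <ul>
    {users.map((user) => (
      <UserItem key={user.id} user={user} />
    ))}
  </ul>
);

Each small piece can now be tested and reused on its own.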


4.    Use Functional or Class Components based on Requirement

If you need to show a user interface without performing any logic or state change, use functional components in place of class components, as functional components are more efficient in this case.

Try to minimize logic in React lifecycle methods like:

componentDidMount();

componentDidUpdate();

These cannot be used with functional components (the useEffect hook covers similar cases there), but they can be used with class components.

While using functional components, you have less control over the render process: whenever the parent re-renders, a plain functional component re-renders too, even for a small change. React.memo (see tip 21) can prevent these unnecessary re-renders.
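For instance (hypothetical components):

// Class component: needed when you rely on lifecycle methods.
class Timer extends React.Component {
  componentDidMount() {
    console.log('mounted');
  }
  render() {
    return <p>Timer</p>;
  }
}

// Functional component: enough for pure UI.
const Greeting = ({ name }) => <h1>Hello, {name}</h1>;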

 

5.    Naming and Destructuring Props

Use meaningful and short names for the props of a component. Also, use the destructuring feature for props, which removes the need to write props. before each property name.
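For example:

// Without destructuring
const UserCardVerbose = (props) => (
  <p>{props.name} is a {props.profession}</p>
);

// With destructuring
const UserCard = ({ name, profession }) => (
  <p>{name} is a {profession}</p>
);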



6.    Use propTypes for Type Check

It is good practice to type-check the props passed to a component, which can help prevent bugs. Refer to the code below for how to use it.
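A minimal sketch, assuming the prop-types package is installed (the Greeting component is illustrative):

import React from 'react';
import PropTypes from 'prop-types';

const Greeting = ({ name, age }) => (
  <p>{name} is {age} years old.</p>
);

Greeting.propTypes = {
  name: PropTypes.string.isRequired, // warns in development if missing or wrong type
  age: PropTypes.number,
};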


 

7.    Naming Conventions

Use PascalCase for components, interfaces, and type aliases.

Use camelCase for JavaScript identifiers like variables, arrays, objects, and functions.
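For example:

// PascalCase for components
function LoginForm() {
  return <form />;
}

// camelCase for variables and functions
const userName = 'Asha';
function getUserName() {
  return userName;
}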



8.    Avoid Unnecessary DIV tags

BAD:
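For instance (a hypothetical Button component wrapped for no reason):

const Button = () => (
  <div>
    <button>Download</button>
  </div>
);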



GOOD:
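Return the element directly, without the extra wrapper:

const Button = () => <button>Download</button>;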


9.    Remove Unnecessary Comments

Add comments only where it’s required so that you do not get confused while changing code at a later time.

Also don’t forget to remove statements like console.log, debugger, and unused commented-out code.

 

10. The Rule of Max 3 Props Inline

When a component takes three or fewer props, keep them on a single line, both in the component definition and in the render function. Once it takes more than three, put each prop on its own line.

Three or fewer props:
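For instance (names are illustrative):

<Button type="submit" variant="primary" disabled />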


More than three props:
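With more than three props, one per line (again illustrative):

<RegistrationForm
  onSubmit={handleSubmit}
  hasPadding
  withError
  errorMessage={errorMessage}
  variant="compact"
/>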


 

11. Use shorthand for Boolean props

BAD: <RegistrationForm hasPadding={true} withError={true} />

GOOD: <RegistrationForm hasPadding withError />

 

12. Avoid curly braces for string props

BAD: <Paragraph variant={"h5"} heading={"A new book"} />

GOOD: <Paragraph variant="h5" heading="A new book" />

13. Write a fragment when a DIV is not needed
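For instance (a hypothetical snippet):

BAD:

return (
  <div>
    <h1>Title</h1>
    <p>Description</p>
  </div>
);

GOOD:

return (
  <>
    <h1>Title</h1>
    <p>Description</p>
  </>
);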

 

14. Integrate self-closing tags when no children are needed

BAD: <NavigationBar></NavigationBar>

GOOD: <NavigationBar />

 

15. Apply ES6 Spread Function

Using the ES6 spread syntax to pass an object's properties is a more straightforward and effective approach. Writing {...props} inside the opening tag automatically passes all of the object's properties as individual props.
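For example (UserCard and the props object are illustrative):

const App = () => {
  const userProps = { name: 'Asha', profession: 'developer' };
  // Equivalent to <UserCard name="Asha" profession="developer" />
  return <UserCard {...userProps} />;
};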


 

16. Using Map Function for Dynamic Rendering
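For instance, rendering a list from an array (the data here is illustrative); note the key prop, which React uses to track list items efficiently:

const fruits = ['apple', 'banana', 'cherry'];

const FruitList = () => (
  <ul>
    {fruits.map((fruit) => (
      <li key={fruit}>{fruit}</li>
    ))}
  </ul>
);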


 

17. Use ES6 Spread Operator
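For example, copying and extending objects or arrays immutably, which is especially handy when updating state:

const user = { name: 'Asha', role: 'viewer' };
const promotedUser = { ...user, role: 'editor' }; // copy, then override role

const ids = [1, 2, 3];
const moreIds = [...ids, 4]; // [1, 2, 3, 4]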



18. Use Ternary Operators

BAD:     
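A hypothetical role check written with if/else:

const { role } = user;

if (role === 'admin') {
  return <AdminUser />;
} else {
  return <NormalUser />;
}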



GOOD:
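The same check as a ternary:

const { role } = user;

return role === 'admin' ? <AdminUser /> : <NormalUser />;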


19. Use Object Literals

BAD:
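A hypothetical chain of conditions:

let component;
if (role === 'admin') {
  component = <AdminUser />;
} else if (role === 'manager') {
  component = <ManagerUser />;
} else {
  component = <NormalUser />;
}
return component;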



GOOD:
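The same mapping as an object literal lookup:

const components = {
  admin: <AdminUser />,
  manager: <ManagerUser />,
  user: <NormalUser />,
};

return components[role];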

 


20. Don't Define a Function Inside Render

BAD:
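For instance, creating a new function on every render (illustrative):

return (
  <button onClick={() => console.log('Clicked!')}>Click me</button>
);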



GOOD:
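Define the handler once, outside the returned JSX:

const handleClick = () => console.log('Clicked!');

return <button onClick={handleClick}>Click me</button>;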

 

21. Use Memo

BAD:
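A sketch (component and prop names are illustrative):

import React, { useState } from 'react';

const ChildComponent = ({ name }) => {
  console.log('child rendered');
  return <p>Hello, {name}</p>;
};

const App = () => {
  const [count, setCount] = useState(0);
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Count: {count}</button>
      <ChildComponent name="Asha" />
    </div>
  );
};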

Here the child component should render only once, because the value of count has nothing to do with ChildComponent. But it re-renders each time you click the button.

GOOD:
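Wrapping the child in React.memo makes it skip re-renders as long as its props stay the same:

const ChildComponent = React.memo(({ name }) => {
  console.log('child rendered');
  return <p>Hello, {name}</p>;
});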

Now, no matter how many times you click on the button, it will render only when necessary.


22. CSS in JavaScript

BAD:
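For instance, styling through a separate stylesheet and class name (file and class names are illustrative):

/* feedback.css */
.feedback-body {
  padding: 10px;
  color: grey;
}

// Feedback.jsx
import './feedback.css';

const Feedback = () => <div className="feedback-body">Thank you!</div>;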



GOOD:
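A sketch using a CSS-in-JS library such as styled-components (assuming it is installed):

import styled from 'styled-components';

const FeedbackBody = styled.div`
  padding: 10px;
  color: grey;
`;

const Feedback = () => <FeedbackBody>Thank you!</FeedbackBody>;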



23. Use Object Destructuring

BAD:
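Repeating the object name for every property (a hypothetical user object):

return (
  <>
    <p>{user.name}</p>
    <p>{user.age}</p>
    <p>{user.profession}</p>
  </>
);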



GOOD:
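Destructure once, then use the plain names:

const { name, age, profession } = user;

return (
  <>
    <p>{name}</p>
    <p>{age}</p>
    <p>{profession}</p>
  </>
);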



24. Avoid using string concatenation

BAD: const userDetails = user.name + "'s profession is " + user.profession

GOOD: const userDetails = `${user.name}'s profession is ${user.profession}`

 

25. Import in Order

BAD:
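A hypothetical jumbled import block:

import React from 'react';
import ErrorImg from '../../assets/images/error.png';
import styled from 'styled-components';
import colors from '../../styles/colors';
import PropTypes from 'prop-types';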



GOOD:
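The same imports grouped by origin:

// Built-in
import React from 'react';

// External
import styled from 'styled-components';
import PropTypes from 'prop-types';

// Internal
import colors from '../../styles/colors';
import ErrorImg from '../../assets/images/error.png';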

The rule of thumb is to keep the import order like this:

  • Built-in
  • External
  • Internal

 

26. Use Implicit return

BAD:
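An arrow function with an unnecessary block and explicit return:

const add = (a, b) => {
  return a + b;
};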



GOOD:
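The same function with an implicit return:

const add = (a, b) => a + b;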



27. Quotes

BAD:
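A hypothetical mix of quote styles:

const greeting = "Hello";
return <img src='/images/logo.png' alt='logo' />;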


GOOD:
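Consistent quoting:

const greeting = 'Hello';
return <img src="/images/logo.png" alt="logo" />;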

Use double quotes for JSX attributes and single quotes for all other JS.

 


 

 


Monday, December 19, 2022

What is Web3?

Photo by Shubham Dhage on Unsplash

The current internet is largely centralized, with a few large companies and organizations controlling much of the infrastructure and data on the web. This centralization can make it easier for these entities to censor or manipulate information, and it can also make it easier for hackers to access or tamper with data.

Web3 technologies aim to address these issues by creating a decentralized infrastructure for the internet that allows for more secure and transparent interactions. They enable users to own and control their own data, rather than relying on centralized entities like companies or governments to store and manage it for them. This decentralized structure is designed to be more resistant to censorship and tampering, and can provide a more secure and private online experience for users.

Overall, the main difference between the current internet and web3 is the degree of centralization and the level of control that users have over their data and interactions. Web3 technologies aim to create a more decentralized and secure internet, while the current internet is largely centralized and controlled by a few large entities.

Web3, also known as the "Decentralized Web" or "Web of Trust," is an umbrella term for a wide range of protocols, technologies, and applications that aim to give users more control over their online data and interactions and to create a more decentralized and secure internet. Some of the main technologies used in web3 include:

  • Blockchain: A decentralized, distributed ledger technology that allows for the secure and transparent storage and transfer of data.
  • Cryptocurrency: A digital currency that uses cryptography for secure financial transactions.
  • Decentralized Autonomous Organizations (DAOs): Organizations that are run using smart contracts and decentralized governance mechanisms.
  • Decentralized Applications (DApps): Applications that run on a decentralized platform, such as a blockchain, and are not controlled by a single entity.
  • InterPlanetary File System (IPFS): A decentralized file storage system that allows for the secure and efficient distribution of large amounts of data.
  • Decentralized Identity (DID): A system that allows users to own and control their own digital identity, rather than relying on a central authority.

These technologies are often used in combination to create decentralized platforms and applications that can provide a more secure and transparent online experience for users. They have the potential to revolutionize a wide range of industries, including finance, healthcare, and education.

Web3 technologies are still in their early stages, and there are many challenges to be overcome before they can be widely adopted. However, they have the potential to revolutionize the way we use the internet and could have a major impact on industries such as finance, healthcare, and education.

For example, one of the main benefits of using a decentralized platform for healthcare is that it can provide a higher level of security and privacy for patients' personal and medical data. With traditional centralized systems, sensitive health information is often stored in a single location, which makes it vulnerable to data breaches and unauthorized access. In contrast, a decentralized platform allows data to be stored in multiple locations, making it more difficult for hackers to access or tamper with.

Decentralized platforms can also make it easier for healthcare providers to share and access patient information, as they can do so without having to go through a central authority. This can improve the efficiency and accuracy of medical care, as doctors and other healthcare providers can quickly and easily access a patient's complete medical history.

In short, web3 technologies such as blockchain and decentralized protocols aim to give users ownership and control of their own data, and to build an internet that is more resistant to censorship and tampering, and more secure and private for its users.


Tuesday, December 13, 2022

Building Cloud Native Applications

 

Image by Gerd Altmann from Pixabay

The term Cloud Native is getting increasingly popular for modern applications that make the best use of the cloud, containers, and orchestration. There is a wide range of open-source software available for each area, such as build, deploy, and monitoring. You can run an existing application on the cloud, but that does not make it cloud native. Cloud native applications are independent services, packaged as self-contained, lightweight containers that are portable and can be scaled quickly based on demand.

The Cloud Native Computing Foundation (CNCF) plays a vital role in fostering open-source technologies to support the adoption of cloud native applications, and in defining the standards. The CNCF lists immutable infrastructure, microservices, declarative APIs, service meshes, and containers as the technological building blocks of cloud-native architecture.

Here are a few characteristics of cloud native applications.

Automation

The application needs to use common standards, interfaces, and formats so that it can be deployed without human intervention. Additionally, developers can use a blue-green deployment strategy to ship improvements without any disruption to the user experience. Cloud native applications are designed to be delivered using continuous integration and continuous delivery (CI/CD) pipelines, the most important aspect of automating building, testing, and deployment.

Independent and flexible

The application should be decoupled from the infrastructure resources so that it can be moved across nodes. By containerizing the microservices, cloud-native applications run independently of the underlying operating system and hardware. Containerized microservices can be moved across nodes, and even from one cluster to another. This makes it seamless to deploy cloud-native applications on premises, on cloud infrastructure, or on hybrid clouds.

Resilient and Scalable

Traditional applications usually have a single point of failure: the application stops working if the main process crashes or due to underlying network or hardware issues, and since auto recovery is not built in, human intervention is required to revive the application. A well-designed cloud native application survives and remains serviceable even in the event of an infrastructure outage, and can be made highly available because it is distributed by design.

Dynamic

A container orchestrator such as Kubernetes can make optimal usage of the available resources. It can scale up or down on-demand based on application traffic. It can also run multiple copies across multiple availability zones to achieve high availability. 

Distributed

Cloud native is an approach to building and running applications that takes advantage of the distributed and decentralized nature of the cloud. Instead of deploying your code as a monolithic application, a cloud native application is based on a distributed microservices architecture.

Monitoring

Distributed applications are usually difficult to debug and trace. To gauge the health of the application, it is important to monitor the logs and metrics generated by the application and the underlying infrastructure. Cloud native applications should stream their logs to centralized storage, where monitoring tools can generate alerts in real time.

Friday, September 16, 2022

Blockchain 101

Photo by GuerrillaBuzz Crypto PR on Unsplash

 

Introduction 

The blockchain, also called distributed ledger technology (DLT), is an open, distributed database that can record transactions among parties efficiently and in a verifiable and permanent way.

The primary aim of blockchain technology is to increase the transparency, efficiency, speed, and security of financial transactions, as well as to lower transaction costs by eliminating the traditional trusted custodian.

Data within the blockchain is secured by cryptographic methods. When a transaction takes place on the blockchain, information from the previous block is copied to a new block along with the new data, encrypted, and the transaction is verified by validators—called miners—in the network.

Bitcoin and Ethereum are separate blockchains, together comprising a major share of the crypto market. The information is stored in blocks connected to form a permanent chain, with each block storing transaction data.

As time passes, a new block is added to the chain to store transaction data; each new block carries a reference to the previous block, hence the name "chain of blocks" (blockchain). This chain of blocks is maintained by a network of independent computers located around the globe that participate in checking and validating each transaction recorded on the blockchain.
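A toy sketch in JavaScript (using Node's built-in crypto module; the field names are illustrative, not the real bitcoin block format) of how each block references the previous block's hash:

const crypto = require('crypto');

function createBlock(data, previousHash) {
  const timestamp = Date.now();
  const hash = crypto
    .createHash('sha256')
    .update(previousHash + timestamp + JSON.stringify(data))
    .digest('hex');
  return { data, timestamp, previousHash, hash };
}

const genesis = createBlock({ note: 'genesis block' }, '0');
const next = createBlock({ from: 'Amit', to: 'Rahul', amount: 10 }, genesis.hash);

// Changing anything in `genesis` would change its hash and break the link stored in `next`.
console.log(next.previousHash === genesis.hash); // true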

 

Generations of Blockchain  

  • 1st Generation (Bitcoin and Digital Currency): Bitcoin Blockchain – The bitcoin blockchain was introduced to the public in 2009 by an anonymous developer or group of developers using the name Satoshi Nakamoto.

It was introduced as a P2P electronic cash transaction system that eliminates all intermediaries. This made the bitcoin blockchain a decentralised system, and bitcoin is therefore also called the first-generation blockchain.

Roughly every 10 minutes, transactions are verified and a new block is added to the blockchain. There is a defined limit on the number of bitcoins that will ever be in circulation, set at 21 million.

Miners (computers on the bitcoin network) are rewarded in bitcoin every time they mine a block. But the reward given to miners is halved after the addition of every 210,000 blocks to the blockchain, or roughly every four years, reducing the rate at which new bitcoins are released into circulation by 50 percent. As fewer bitcoins are released into the system, each bitcoin becomes scarcer and its value tends to increase.
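The halving schedule can be expressed as a simple formula; a toy sketch:

// Block reward in BTC at a given block height (the initial reward was 50 BTC)
function blockReward(height) {
  const halvings = Math.floor(height / 210000);
  return 50 / Math.pow(2, halvings);
}

console.log(blockReward(0));      // 50
console.log(blockReward(210000)); // 25
console.log(blockReward(630000)); // 6.25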

 

  • 2nd Generation (Smart Contract): Ethereum Blockchain – The Ethereum blockchain, launched in 2015, in addition to documenting transactions (as first-generation blockchains do), introduced smart contracts: self-managing agreements whose actions are triggered when specified events occur, such as a date passing or a certain value being reached.

The Ethereum blockchain is much faster than first-generation blockchains: each block of information on the Ethereum blockchain is verified and created every 10-20 seconds.

The native currency on the Ethereum blockchain is ETH. Unlike bitcoin, there is no limit on the supply of ETH.

 

  • 3rd Generation (The Future): The major issue faced by blockchains is scaling, and many new digital currencies are attempting to revise their blockchains to resolve it. Many new application areas of blockchain are also being identified, and solutions are being implemented.

 

Though both the Bitcoin and Ethereum blockchains work on the principles of distributed ledgers and cryptography, there are a few differences. Transactions on the Ethereum network can include executable code (smart contracts), while transactions on the Bitcoin network generally carry notes only.

Another difference is the time to confirm a transaction: a transaction on Ethereum is confirmed in seconds, while the Bitcoin blockchain takes minutes. Bitcoin uses the SHA-256 algorithm, and Ethereum uses the Ethash algorithm.

 

Consensus Mechanism –  

In the context of blockchain, the consensus mechanism is critical to ensure bad actors are identified and prevented from carrying out deliberate fraudulent actions; for example, the double-spend scenario is prevented by the consensus mechanism followed by the blockchain nodes.

Proof-of-work and proof-of-stake are the two most widely used consensus mechanisms that allow blockchains to operate securely. Both algorithms determine which node (computer) in the blockchain network gets to add the next block of transactions to the chain.

 

  • Proof of Work (PoW) 

Consider a scenario: Amit is expected to transfer 10 bitcoins to Rahul, but Amit transfers only 8 bitcoins, or Amit does not actually have 10 bitcoins. A fake transaction is being performed, or the transaction is being done incorrectly.

Because there is no bank or central authority to approve the transfer of a decentralized currency, some form of verification is needed, and this verification is done by miners.

They are called miners because in PoW new coins come into circulation through mining: when a miner verifies a transaction and adds a block to the blockchain, they receive coins as a reward, and this is how new coins enter circulation. This process is called mining, and the node performing the operation is called a miner. A miner can be anyone who is part of the bitcoin network, and their work is to verify transactions, also called transaction validation.

In PoW, to validate a transaction, miners must solve a complex mathematical puzzle, which requires very high computational power. Many miners compete to solve the puzzle, but only one wins and gets to add a block to the blockchain; the other miners must compete again for some other transaction.

This is the main drawback of PoW: to validate one transaction, many miners spend their computational power, but only one of them wins, and the computational power spent by the others is wasted.

Because solving the mathematical puzzle requires high computational power, many miners work together in a mining pool that combines their mining power. This increases their chance of winning the block reward, which is then distributed among all participating miners in proportion to their computational power.

An analogy for the complex mathematical puzzle: it is easy to multiply two 10-digit numbers, but much more difficult to work out which two numbers were multiplied together to get 398054854052345.

The miner must identify the two numbers by trial and error. Once found, the miner presents the solution to all miners in the network, who can easily verify it.

The miner who solves the puzzle first is allowed to create a new block and publish it to the nodes in the network, which individually audit the existing ledger and the new block. If validation is successful, the new block is added to the blockchain, and the miner is rewarded with new bitcoins as well as a transaction fee.
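A toy proof-of-work loop in JavaScript; here difficulty is the number of leading zeros required in the SHA-256 hash, a simplification of how real mining difficulty works:

const crypto = require('crypto');

function mine(blockData, difficulty) {
  const target = '0'.repeat(difficulty);
  let nonce = 0;
  // Keep trying nonces until the hash starts with `difficulty` zeros.
  while (true) {
    const hash = crypto
      .createHash('sha256')
      .update(blockData + nonce)
      .digest('hex');
    if (hash.startsWith(target)) {
      return { nonce, hash }; // trivial for other nodes to verify
    }
    nonce++;
  }
}

console.log(mine('Amit pays Rahul 10 BTC', 4));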

 

Mining pools help small miners (with low computational power) participate in the mining process. Large mining pools, however, tend to get centralized, moving away from decentralization, because large computational power leads to a higher chance of mining the next block. PoW is used by many well-known cryptocurrencies such as bitcoin, litecoin, and dogecoin.

 

  • Proof of Stake (PoS) 

This consensus mechanism was designed to reduce the high energy consumption caused by PoW and to increase scalability. The process involves validators instead of miners.

Validators perform the same job as miners in PoW. The key difference is that validators do not compete against each other by spending computational power; instead, a single validator is selected to verify each transaction.

A validator can be anyone who is part of the blockchain network and has coins to stake as security. The minimum number of coins to be staked is defined by the blockchain network, and any node that can stake the required number of coins can be considered as a validator.

Using an algorithm, one of the validators is selected at random to verify the transaction; the one with the higher stake has a higher probability of being chosen.
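A toy stake-weighted selection in JavaScript (the validator set is illustrative):

function pickValidator(validators) {
  const totalStake = validators.reduce((sum, v) => sum + v.stake, 0);
  let ticket = Math.random() * totalStake; // a random point on the stake line
  for (const validator of validators) {
    if (ticket < validator.stake) return validator;
    ticket -= validator.stake;
  }
}

const validators = [
  { id: 'node-A', stake: 40 },
  { id: 'node-B', stake: 10 },
];

console.log(pickValidator(validators)); // node-A is picked ~80% of the time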

Because only one validator verifies the transaction, there is a chance that a wrong transaction is verified; in that case, all the coins the validator has at stake are confiscated, a process referred to as slashing. Slashing also happens if the validator fails to complete the verification due to technical issues like a power cut, and the validator does not get back the staked coins.

Because not all network nodes participate in the verification process, scalability and speed are enhanced while using less computational power, and even a normal computer can perform this validation.

Like mining pools in PoW, there are staking pools in PoS, where many nodes combine their coins and participate as a single validator. The main advantage of a staking pool is that nodes that do not have enough coins to stake on their own can join together to meet the minimum requirement for becoming a validator. The disadvantage is a weakening of decentralization, because the node (or pool) with the higher stake has a higher chance of being selected as validator.

 

Key differences - 

PoW requires a large investment in high-end hardware to generate computational power, whereas PoS requires only coins worth the minimum staking value to become eligible as a validator.

PoS is more decentralized than PoW thanks to staking pools: it is comparatively easier to pool coins to stake than to invest in high-end hardware for computational power, so only larger players can afford to participate in PoW.

In PoW, after adding a block, nodes receive coins as a reward along with a transaction fee; in PoS, the validator receives only the transaction fee.

PoW consumes a lot of energy, whereas PoS requires significantly less, about 99% less. For this reason, Ethereum is moving from PoW to PoS as its consensus mechanism. This move is referred to as the Ethereum Merge, which took place around 14-15 September 2022.

Though PoW consumes a higher amount of energy, the mechanism has been well tested over the past few years and is used by big cryptocurrencies like bitcoin, keeping them secure and decentralized. PoS is relatively new and less tested compared to PoW.