Wednesday, May 31, 2017

How to remove unTabs extension from Google Chrome (Removal Guide)


Reset Group Policy to the default settings to remove the unTabs extension


1- Open the Command Prompt by pressing “Windows Key + X”, and click on “Command Prompt (Admin)” to open it in Administrator mode.


You can easily open the Command Prompt by typing "cmd" into the search box (Win + S), then right-clicking the result and choosing Run as Administrator.

2- In the Command prompt type (or copy/paste) the following commands:

  1. Type:
    rd /S /Q "%WinDir%\System32\GroupPolicyUsers"
    Press Enter.
  2. Type:
    rd /S /Q "%WinDir%\System32\GroupPolicy"
    Press Enter.
  3. Type:
    gpupdate /force
    Press Enter.

You should see the following notifications after the commands have been run:
User Policy update has completed successfully.
Computer Policy update has completed successfully.

3- The "Installed by enterprise policy" permissions will now be removed from Chrome, and you should be able to remove the unTabs extension.



Saturday, April 22, 2017

4G: what is LTE?

The fourth generation (4G) of mobile telephony begins with the LTE technology, short for Long Term Evolution. It is yet another proposal presented by the 3GPP. Although, in the view of the ITU (International Telecommunication Union), an agency of the United Nations, LTE does not meet all the technical requirements to be considered a 4G standard, commercially the technology is accepted as such.
Like the HSPA+ technology, the LTE standard stands out for the speeds at which it can work: depending on the combination of features implemented in the network and on the user's device, rates of 300 Mb/s for download and 75 Mb/s for upload can be reached.
To make the speed aspect easier to grasp, the level of device compatibility with LTE is defined in categories:
  • Category 1: download up to 10 Mb/s; upload up to 5 Mb/s;
  • Category 2: download up to 50 Mb/s; upload up to 25 Mb/s;
  • Category 3: download up to 100 Mb/s; upload up to 50 Mb/s;
  • Category 4: download up to 150 Mb/s; upload up to 50 Mb/s;
  • Category 5: download up to 300 Mb/s; upload up to 75 Mb/s.
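For illustration, the category list above can be captured in a small lookup table (the helper name is our own, not part of any standard API):

```python
# LTE UE categories from the list above: (max download, max upload) in Mb/s.
LTE_CATEGORIES = {
    1: (10, 5),
    2: (50, 25),
    3: (100, 50),
    4: (150, 50),
    5: (300, 75),
}

def max_rates(category):
    """Return the peak (download, upload) rates in Mb/s for a UE category."""
    return LTE_CATEGORIES[category]

print(max_rates(3))  # (100, 50)
```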
Of course, these speeds are rarely reached in full, not least because a number of factors determine the rates an LTE network can achieve. The number of antennas in simultaneous use is one of them; yes, just like HSPA+, the LTE technology can also use MIMO techniques.
Another important factor is the channel bandwidth, which can be 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, or 20 MHz. In theory, the wider the available bandwidth, the higher the data transfer rate.
LTE also differs in its access method. While the UMTS and HSPA technologies are based on the W-CDMA standard, LTE uses the OFDMA (Orthogonal Frequency Division Multiple Access) specification, which distributes the transmitted information among several parallel subsets of carriers, another aspect that favors higher downlink (download) speeds.
For the uplink (upload), the scheme used is SC-FDMA (Single Carrier Frequency Division Multiple Access), a specification similar to OFDMA but with lower power consumption, so the energy use of connected devices also drops. Despite the name, SC-FDMA can also use subsets of carriers.
Although LTE presents itself as a very advanced standard, work is already under way on an improved version, LTE Advanced, which is fully compliant with the ITU requirements for a 4G technology. This variant is expected to offer rates of up to 1 Gb/s (gigabit per second) for download and 500 Mb/s for upload.
LTE can operate on several frequency bands. In Brazil, for example, the technology, once in operation, is expected to use the 2.5 GHz band.

Wrapping up


If you have read this text from start to finish (parts 1 and 2), you may have been surprised by the number of technologies related to mobile telephony. It is a market that involves the interests of many companies and governments and, as a result, evolves quickly, which may explain such complexity.
Despite so many acronyms and technical names, you will now be better able to understand what carriers offer and, for example, find a plan better suited to your needs and expectations.
Your mobile device also gives you ways to understand how the cellular network you are currently using is working: it may, for example, display a 'G' symbol to indicate it is using GPRS, 'E' for EDGE, '3G' for W-CDMA, 'H' for HSPA, and so on (see your device's manual for details).


GPRS, EDGE, WCDMA, HSDPA and HSUPA Networks: Know the Difference

For some time now I have noticed that when I access the internet on my phone, a different letter appears next to the data transmission icon. Do you know the difference between these letters? No?
Then let's clarify what those letters are and why the phone sometimes insists on showing different letters on each connection to the carriers' mobile internet.

The letter G means the phone or tablet is connected to the GPRS network. This is the slowest of all mobile internet connections in use today. It is considered 2.5G internet and offers a very low rate: downloads between 32 and 80 kbit/s and uploads between 8 and 20 kbit/s. In bytes, that download rate works out to between 4 and 10 kB/s and the upload to between 1 and 2.5 kB/s. In practice, it is a very poor connection for just about anything, because the ping (latency) is extremely high.

The letter E means the phone or tablet is connected to the EDGE network. This network is a little better than GPRS and offers an internet connection with download rates between 35 and 237 kbit/s and uploads between 9 and 59 kbit/s. In bytes, that is roughly a download speed between 4 and 29 kB/s and an upload between 1 and 7 kB/s. This network is considered 2.75G. It is still not considered a 3G network, and therefore has a high ping (latency), which makes the internet connection very slow, especially if the signal is not strong.
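The kbit-to-kB arithmetic used above is just division by 8 (8 bits per byte); a quick sketch:

```python
def kbit_to_kB(kbit_per_s):
    """Convert a rate in kilobits per second to kilobytes per second."""
    return kbit_per_s / 8

# GPRS download range from the text: 32-80 kbit/s works out to 4-10 kB/s.
print(kbit_to_kB(32), kbit_to_kB(80))  # 4.0 10.0
```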

The letter H means the phone or tablet is indeed connected to the 3G network. Within the 3G network, however, there is still a division between WCDMA, HSDPA and HSUPA. The device may be connected to any one of these three 3G networks.

WCDMA is the most basic 3G mode. Download and upload speeds are similar to EDGE, with the difference that the ping is much lower, resulting in a much faster and more stable connection. This actually allows simple tasks such as browsing websites, email, Facebook, and so on.

HSDPA is a somewhat more advanced 3G mode that offers a much higher download rate than WCDMA and an even lower ping. This provides a much better internet connection, making more demanding tasks possible, such as downloading programs, watching videos on YouTube, talking over a VoIP connection, making video calls, and so on.

HSUPA is the evolution of HSDPA, where the download rate is even higher and the upload rate is higher as well, resulting in a good internet connection for uploading videos to YouTube, posting frequent updates on social networks, using Skype for voice and video calls, and so on.

There is also the 3G+ network, where 3G technology was improved to offer even higher download and upload rates and an even lower ping, so you can do many online activities without the connection slowing down.

Finally, there are the 4G and 4G+ networks, currently the best mobile internet available for both download and upload. On a 4G network, download speeds can exceed 10 megabits per second and uploads can reach 5 megabits per second. It is an excellent connection for any online activity; unfortunately, the technology is still in its infancy here in Brazil. Smartphones and tablets compatible with the 4G network are expensive, and 4G coverage is still quite limited, available only in large urban centers.

Besides the problems I have already mentioned (expensive devices and limited carrier coverage), the other problem with the 4G network is the infamous internet data cap. What good is it for carriers to offer 4G with very high speeds if the data allowance is very small? On a 4G network, a 1 gigabyte (1 GB) data cap can be used up in a matter of minutes, leaving the consumer stranded with a heavily throttled speed, unable to browse the internet properly.
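The "used up in minutes" claim is easy to check with back-of-the-envelope arithmetic: at a sustained 10 Mbit/s, a 1 GB cap lasts under 15 minutes. A rough sketch (the cap size and speed are illustrative, and 1 GB is taken as 1024 MB):

```python
def minutes_to_burn(cap_gb, mbit_per_s):
    """Minutes needed to consume a data cap at a sustained download rate.

    Uses 1 GB = 1024 MB = 8192 Mbit.
    """
    cap_mbit = cap_gb * 1024 * 8
    seconds = cap_mbit / mbit_per_s
    return seconds / 60

print(round(minutes_to_burn(1, 10), 1))  # roughly 13.7 minutes
```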

For us to move forward in terms of internet use, these data caps need to be increased by the carriers; otherwise, super-fast internet speeds are pointless if the data allowance runs out even faster. Let's see whether, from now on, this data-cap situation improves for us consumers on the 3G and 4G networks. And you, what did you think of the different networks that exist for 2G, 3G and 4G mobile internet?

Friday, March 17, 2017

Maximum Sizes on FAT16 and FAT32 Volumes

Maximum Volume Sizes

The maximum size of a volume depends on the file system used to format the volume. Windows 2000 allows you to format volumes with three different file systems: NTFS, FAT16, and FAT32.
Windows 2000 has the capability to combine noncontiguous disk areas when creating volume sets and stripe sets, but these volumes have the same maximum size limitations as a single volume.

Maximum Sizes on FAT16 Volumes

FAT16 can support a maximum of 65,535 clusters per volume. Table 3.10 lists FAT16 size limits.
Important
For Windows NT and Windows 2000, the cluster size of FAT16 volumes between 2 GB and 4 GB is 64 KB. This cluster size is known to create compatibility issues with some applications. For this reason, it is recommended that FAT32 be used on volumes that are between 2 GB and 4 GB. One of the known compatibility issues involves setup programs that do not compute volume free space properly on a volume with 64 KB clusters and will not run because of a perceived lack of free space. The Format program in Windows 2000 displays a warning and asks for a confirmation before formatting a volume with 64 KB clusters.
Table 3.10 FAT16 Size Limits
Description            Limit
Maximum file size      2^32 - 1 bytes
Maximum volume size    4 GB
Files per volume       2^16
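The 4 GB maximum volume size follows from the cluster arithmetic: roughly 2^16 clusters at the 64 KB cluster size mentioned above. A quick check:

```python
# FAT16: on the order of 2**16 clusters per volume; Windows NT/2000 use
# 64 KB clusters for volumes between 2 GB and 4 GB.
clusters = 2 ** 16
cluster_size = 64 * 1024            # 64 KB in bytes
max_volume = clusters * cluster_size
print(max_volume == 4 * 1024 ** 3)  # True: exactly 4 GB
```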

Maximum Sizes on FAT32 Volumes

The FAT32 volume must have at least 65,527 clusters. The maximum number of clusters on a FAT32 volume is 4,177,918. Windows 2000 creates volumes up to 32 GB, but you can use larger volumes created by other operating systems such as Windows 98. Table 3.11 lists FAT32 size limits.
Table 3.11 FAT32 Size Limits
Description            Limit
Maximum file size      2^32 - 1 bytes
Maximum volume size    32 GB (due to the Windows 2000 Format utility; the maximum volume size that Windows 98 can create is 127.53 GB)
Files per volume       Approximately 4 million
Important
Windows 2000 can format new FAT32 volumes up to 32 GB in size but can mount larger volumes (for example, up to 127.53 GB and 4,177,918 clusters from a volume formatted with the limits of Windows 98). It is possible to mount volumes that exceed these limits, but doing so has not been tested and is not recommended.

Maximum Sizes on NTFS Volumes

In theory, the maximum NTFS volume size is 2^32 clusters. However, even if there were hardware available to supply a logical volume of that capacity, there are other limitations to the maximum size of a volume.
One of these limitations is partition tables. By industry standards, partition tables are limited to 2^32 sectors. Sector size, another limitation, is a function of hardware and industry standards, and is typically 512 bytes. While sector sizes might increase in the future, the current size puts a limit on a single volume of 2 terabytes (2^32 * 512 bytes, or 2^41 bytes).
For now, 2 terabytes should be considered the practical limit for both physical and logical volumes using NTFS.
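The 2-terabyte figure is simply 2^32 sectors times 512 bytes per sector; a quick check:

```python
sectors = 2 ** 32      # partition tables are limited to 2**32 sectors
sector_size = 512      # bytes, the typical hardware sector size
limit = sectors * sector_size
print(limit == 2 ** 41)    # True
print(limit // 1024 ** 4)  # 2 (terabytes)
```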
The maximum number of files on an NTFS volume is 2^32 - 1. Table 3.12 lists NTFS size limits.
Table 3.12 NTFS Size Limits
Description            Limit
Maximum file size      2^64 bytes - 1 KB (on-disk format); 2^44 bytes - 64 KB (implementation)
Maximum volume size    2^64 allocation units (on-disk format); 2^32 allocation units (implementation)
Files per volume       2^32 - 1

RS232 flow control and handshaking

You would probably prefer the first method, where your helper pauses for a short while. To achieve this, there will be some communication (eye contact, a yell, or something like that) to stop him from throwing new apples. How simple, but is it always this simple? Consider the situation where one computer device sends information to another using a serial connection. Now and then, the receiver needs to perform some actions, for example to write the contents of its buffers to disk. During that time no new information can be received, so some communication back to the sender is needed to stop the flow of bytes on the line. A method must be present to tell the sender to pause. To do this, both software and hardware protocols have been defined.

Software flow control

Both software and hardware flow control need software to perform the handshaking task. This makes the term software flow control somewhat misleading. What is meant is that with hardware flow control, additional lines are present in the communication cable which signal handshaking conditions. With software flow control, also known as XON-XOFF flow control, bytes are sent to the sender over the standard communication lines.
Using hardware flow control implies that more lines must be present between the sender and the receiver, leading to a thicker and more expensive cable. Therefore, software flow control is a good alternative when maximum communication performance is not needed. Software flow control uses the data channel between the two devices, which reduces the bandwidth. In most cases, however, the reduction in bandwidth is not so large that it is a reason not to use it.
Two bytes in the ASCII character set have been predefined for use with software flow control. These bytes are named XOFF and XON, because they can stop and restart transmission. The byte value of XOFF is 19; it can be simulated by pressing Ctrl-S on the keyboard. XON has the value 17, which is equivalent to Ctrl-Q.
Using software flow control is easy. If the sending of characters must be postponed, the character XOFF is sent on the line; to restart the communication, XON is sent. Sending the XOFF character only stops the communication in the direction of the device which issued the XOFF.
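As a rough illustration of the idea (a toy model, not real serial I/O), a receiver of such a stream consumes the XOFF (19) and XON (17) bytes and toggles a "sending allowed" flag, keeping everything else as payload:

```python
XON, XOFF = 17, 19  # Ctrl-Q and Ctrl-S

def filter_stream(stream):
    """Split a byte stream into payload bytes while tracking XON/XOFF state.

    Returns (payload, sending_allowed): the flow-control bytes are consumed,
    everything else is kept as data.
    """
    sending_allowed = True
    data = []
    for b in stream:
        if b == XOFF:
            sending_allowed = False
        elif b == XON:
            sending_allowed = True
        else:
            data.append(b)
    return bytes(data), sending_allowed

payload, allowed = filter_stream(b"he" + bytes([XOFF]) + b"llo" + bytes([XON]))
print(payload, allowed)  # b'hello' True
```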
This method has a few disadvantages. One has already been discussed: using bytes on the communication channel takes up some bandwidth. The other reason is more severe. Handshaking is mostly used to prevent an overrun of the receiver buffer, the buffer in memory used to store the recently received bytes. If an overrun occurs, this affects the way incoming characters on the communication channel are handled. In the worst case, where the software has been designed badly, these characters are thrown away without being checked. If such a character is XOFF or XON, the flow of communication can be severely disrupted: the sender will continuously supply new information if the XOFF is lost, or never send new information if no XON was received.
This also holds for communication lines where signal quality is bad. What happens if the XOFF or XON message is not received clearly because of noise on the line? Special precaution is also necessary to ensure that the information sent does not contain the XON or XOFF characters as data bytes.
Therefore, serial communication using software flow control is only acceptable when communication speeds are not too high and the probability of buffer overruns or data damage is minimal.

Hardware flow control

Hardware flow control is superior to software flow control using the XON and XOFF characters. The main drawback is that an extra investment is needed: extra lines are necessary in the communication cable to carry the handshaking information.
Hardware flow control is sometimes referred to as RTS/CTS flow control. The term refers to the extra inputs and outputs used on the serial device to perform this type of handshaking. RTS/CTS in its original form is used for handshaking between a computer and a device connected to it, such as a modem.
First, the computer raises its RTS line to signal the device that some information is present. The device checks whether there is room to receive the information and, if so, raises the CTS line to start the transfer. When using a null modem connection, this is somewhat different. There are two ways to handle this type of handshaking in that situation.
In the first, the RTS output of each side is connected to the CTS input of the other. The communication protocol then differs somewhat from the original one. The RTS output of computer A signals computer B that A is capable of receiving information, rather than being a request to send information as in the original configuration. This type of communication can be performed with a null modem cable wired for full handshaking. Although this cable is not completely compatible with the way hardware flow control was originally designed, if the software is properly written for it, it can achieve the highest possible speed, because there is no overhead for requesting on the RTS line and answering on the CTS line.
In the second null modem configuration with hardware flow control, the software side looks quite similar to the original use of the handshaking lines. The CTS and RTS lines of one device are connected directly to each other, which means that the request-to-send query answers itself. As soon as the RTS output is raised, the CTS input will detect a high logic value, indicating that sending of information is allowed. This implies that information will always be sent as soon as sending is requested by a device, if no further checking is present. To prevent this from happening, two other pins on the connector are used: data set ready (DSR) and data terminal ready (DTR). These two lines indicate whether the attached device is working properly and willing to accept data. When these lines are cross-connected (as in most null modem cables), flow control can be performed using them: a device raises its DTR output when it can accept incoming characters.

Using interrupts on a PC


The interrupt mechanism present on PCs is controlled by an interrupt management chip, the programmable interrupt controller (PIC). The chip used on XTs is an 8259A device capable of handling 8 hardware interrupts. When an interrupt occurs on one of the input lines, the processor's INTR line is activated by the PIC. The PIC is responsible for handling the priority when two or more interrupts occur at nearly the same time.
To allow more hardware to be used with the computer, a second interrupt controller was added to AT compatible systems. To make this work, the secondary controller uses one interrupt line on the existing one. This means that in the AT configuration, only 7 interrupt lines on the first controller can be used and 8 on the second, for a total of 15 possible hardware interrupts, which is enough for most situations. To stay backward compatible with older applications, the hardware line of IRQ 2 on XT systems was redirected to IRQ 9 on the second controller. The BIOS then redirects a hardware interrupt on line 9 to the handler of IRQ 2. In this way, the interrupt service routine registered for IRQ 2 is called, even though the interrupt that actually occurs is IRQ 9.
The primary PIC is accessible at I/O ports 0x20 and 0x21, the secondary at 0xA0 and 0xA1. These ports are used for two different purposes by an application accessing hardware with interrupts. One is that the PIC must be told that an interrupt on a specific line may be honored and sent to the processor. The other is to reset the interrupt when the software has finished performing all necessary actions.
Priority can be an important issue when performing serial communications. The number of interrupts occurring during communication can be quite high. If no buffering is used, each single incoming byte will be announced by a trigger signal on the interrupt line. When buffering is present (as on most UARTs used today), this drops to about one interrupt every fourteen bytes, which is still a large number of interrupts compared to the amount of information coming in. This number doubles when interrupt driven sending is also used, not to mention the interrupts generated when modem signals are checked using interrupts.

Interrupt service routines

The piece of software started when an interrupt occurs is called an interrupt service routine, or ISR. The BIOS maintains a table of the starting addresses of all possible routines in the address range 0000:0000 to 0000:0400. A total of 256 routines are available (most are called by software only). By default, the hardware interrupts are redirected by this table to a small BIOS routine which clears the interrupt and then exits. To make your software interrupt aware, this default routine must be replaced by your own.
Changing the address of an ISR can be done by changing bytes directly at the memory location in the table or, better, by using the DOS software interrupt designed for it. Please refer to your compiler documentation for the best way to do this. The following table shows the table entries for the hardware interrupts.
ISR number for each hardware interrupt
Hardware interrupt   Software interrupt   Default use
0                    0x08                 Clock
1                    0x09                 Keyboard
2                    0x0A                 Secondary PIC cascade
3                    0x0B                 COM2 and COM4
4                    0x0C                 COM1 and COM3
5                    0x0D                 LPT2 / hard disk on the XT
6                    0x0E                 Floppy disk
7                    0x0F                 LPT1
8                    0x70                 Real time clock
9                    0x71                 Generates software interrupt 0x0A
10                   0x72                 -
11                   0x73                 -
12                   0x74                 -
13                   0x75                 Math co-processor
14                   0x76                 IDE hard disk
15                   0x77                 -
The appropriate table entry must contain the segmented starting address of the function in the application program that handles the interrupts. This function must end with an IRET instruction, which means that a normal function inside your program cannot be used as an interrupt service routine. In C/C++, for example, the keyword interrupt must be used in front of the function declaration to generate the necessary assembly instructions. Refer to your compiler manual for details.
When an interrupt occurs, the software must check the IIR (interrupt identification register) to see which event caused the interrupt. If more than one UART shares the same IRQ level, be sure to check the IIR register of all the UARTs in your application program that use that IRQ. See the programming examples for details.
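The IRQ-to-vector convention in the table above can be expressed as a small helper. This is only a sketch of the numbering scheme, not real interrupt handling:

```python
def irq_to_vector(irq):
    """Map a hardware IRQ number (0-15) to its software interrupt vector.

    IRQs 0-7 are served by the primary PIC at vectors 0x08-0x0F;
    IRQs 8-15 by the secondary PIC at vectors 0x70-0x77.
    """
    if not 0 <= irq <= 15:
        raise ValueError("IRQ must be in the range 0..15")
    return 0x08 + irq if irq < 8 else 0x70 + (irq - 8)

print(hex(irq_to_vector(4)))   # 0xc  -> COM1 and COM3
print(hex(irq_to_vector(14)))  # 0x76 -> IDE hard disk
```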

The priority scheme

The PIC maintains a priority scheme in which lower IRQ numbers have a higher priority than higher ones. It honors new interrupts while the processor is busy processing another one, as long as the IRQ number of the new interrupt is lower than that of the one currently being handled. Therefore, playing around with the interrupt numbers assigned to different devices in your computer can decrease or increase the maximum possible speed of serial communications. Be aware that the system assumes most hardware to exist on a particular interrupt level, so watch what you are doing. Changing the interrupt level of hard disks, floppy drives and the like is generally not a good idea, but changing the interrupt level of a network card may produce good results.

Enabling interrupts

An interrupt will not occur unless the PIC is told that it is allowed to pass it through. This means that the PIC must be programmed to allow a UART to perform interrupt driven communication. For this, the PIC's interrupt mask register (IMR) is used. This register is present at I/O port 0x21 for the first PIC controller and 0xA1 for the second.
The eight bits of the IMR mask register each control the behaviour of one interrupt line. If a bit is set to zero, the accompanying interrupt will be passed through to the processor. IRQ 2 is a somewhat special case on AT class computers: to make this interrupt occur, both the IRQ 2 and IRQ 9 bits must be cleared. The IRQ 2 bit will already be cleared in most circumstances to let other interrupts on the secondary PIC occur. The IRQ 9 bit must also be cleared, which is not compatible with the original way of enabling IRQ 2 on an XT computer.
Because of this difference from the XT computer, older software written for the XT and using IRQ 2 won't be able to use this IRQ. Designers tried to make the AT PIC configuration as compatible as possible by redirecting to IRQ 2, adding extras to the BIOS software and so on, but they forgot this little IMR register which controls the visibility of the interrupt to the software.
Changing the IMR is easy. First read the byte at I/O address 0x21 or 0xA1, using the appropriate function in assembly or in your compiler. Then clear the bit and write the value back to the same location. Be sure to set the same bit again when exiting the application. Otherwise, when new characters are received on the line after the application has stopped running, the PIC will trigger the software interrupt routine on your machine, which may lead to strange behaviour of the computer, including a complete crash.
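The read-modify-write just described boils down to choosing the right port and bit. Python cannot touch I/O ports, of course, but the address and mask computation can be sketched (helper names are our own):

```python
def imr_location(irq):
    """Return (port, bit mask) of the IMR bit controlling a given IRQ.

    IRQs 0-7 live in the primary PIC's IMR at port 0x21;
    IRQs 8-15 in the secondary PIC's IMR at port 0xA1.
    """
    if irq < 8:
        return 0x21, 1 << irq
    return 0xA1, 1 << (irq - 8)

def enable_bit(imr_value, mask):
    """Clear the mask bit: a zero bit in the IMR lets the interrupt through."""
    return imr_value & ~mask

port, mask = imr_location(4)        # COM1/COM3 interrupt line
print(hex(port), hex(mask))         # 0x21 0x10
print(hex(enable_bit(0xFF, mask)))  # 0xef
```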
The most important situation to take care of is the user pressing Ctrl-C. In most applications, pressing this key combination will stop the program immediately, without restoring the IMR state and, even more important, the interrupt service routine address. The interrupt service routine is the function called when an interrupt occurs. It is a piece of software inside your application, but when the program exits, other code will occupy the same memory location. If the interrupt service routine starting address is not restored in the BIOS table, the BIOS will still call the same memory location when an interrupt occurs after the program has exited. Should I explain further?

Acknowledging interrupts

The PIC will block new interrupts of the same or lower priority as long as the software has not finished processing the previous one. This means that the software must signal the PIC that new interrupts may occur; otherwise the computer will eventually hang.
Clearing the interrupt state of the PIC is done by writing a non-specific end of interrupt command to the PIC's register, available at address 0x20 for the first controller and 0xA0 for the second. This command consists of the byte value 0x20. If an interrupt has occurred on the second PIC, it is not necessary to reset the state of both controllers: only the second controller needs an end of interrupt command, because the BIOS has already cleared the state on the first controller before calling the interrupt service routine.

PC COM Ports I/O and IRQ use

On PCs, the register set of a UART is mapped into the I/O map of the processor. The twelve registers of the UART are accessible through 8 I/O bytes. To achieve this, read-only and write-only registers are made accessible at the same PC I/O port where possible. In two situations, a bit in one register (the divisor latch access bit) is used to swap different registers onto a specific port.
Four serial communication devices have been predefined on a PC. The UARTs for these devices have default addresses at which their registers are accessible. The devices are named COM1 through COM4. A default interrupt line number is also assigned to each device. Because only a few IRQ lines are available on PC systems, only two interrupt lines are used for the four devices. The software must be intelligent enough to detect which UART needs attention when an interrupt occurs if more than one UART shares the same interrupt.
Default I/O addresses and IRQs on a PC system
Device   I/O address range   IRQ
COM1     0x3F8 - 0x3FF       4
COM2     0x2F8 - 0x2FF       3
COM3     0x3E8 - 0x3EF       4
COM4     0x2E8 - 0x2EF       3
Please note that the table lists only the default I/O addresses on IBM XT and AT compatible systems. On a PS/2 system, other addresses are used. These values are only recommendations: if other hardware in the computer makes it necessary, a UART can be moved to another I/O address or IRQ.
The actual I/O addresses used are stored in a table in the BIOS data area. This table starts at memory address 0000:0400 and contains important device information. Unfortunately, only the I/O port addresses of the UARTs are stored; no information is present about the IRQ used by a specific UART. This makes the table only partially useful for identifying the serial ports.
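For illustration, the defaults from the table can be held in a small lookup (remember they are only recommendations; the helper name is our own):

```python
# Default base I/O address and IRQ for each COM port on XT/AT systems.
COM_DEFAULTS = {
    "COM1": (0x3F8, 4),
    "COM2": (0x2F8, 3),
    "COM3": (0x3E8, 4),
    "COM4": (0x2E8, 3),
}

def shares_irq(a, b):
    """True if two COM ports use the same default interrupt line."""
    return COM_DEFAULTS[a][1] == COM_DEFAULTS[b][1]

print(shares_irq("COM1", "COM3"))  # True: both default to IRQ 4
```

This sharing is exactly why an interrupt handler must poll the IIR of every UART on the same IRQ line.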

Thursday, March 16, 2017

Storage : RAID Overview

RAID is an acronym that originally stood for "redundant array of inexpensive disks". Today it commonly refers to "redundant array of independent disks". With the advent of solid state drives, more and more people refer to RAID as "redundant array of independent drives". RAID technology combines multiple physical storage drives to create a logical drive that spans all the physical drives. Data is then written across the multiple drives instead of on a single drive. The logical drive or logical volume appears as a physical drive to the operating system of the host that stores data on it. The group of drives from which the logical volume is created is called a RAID set or a RAID group.
RAID has two primary functions. The first is to protect data against failed drives. The second is to improve I/O performance by serving I/Os from multiple drives in parallel.
RAID can be implemented either in software or hardware. Software RAID is implemented at a host's operating system level. It uses host resources such as processor and memory. For certain RAID techniques this may not pose a performance problem; however, for RAID techniques that use parities, software RAID may lead to significant performance problems. Hardware RAID can be implemented either on a host or on a storage system.
An integrated RAID controller present on the motherboard implements hardware RAID on a host. Alternately, a RAID controller expansion card can be attached to the host. The hardware RAID controller has its own processor and memory. Because of this there is no RAID overhead on the host’s processor. RAID controllers are also implemented on a storage system. Hardware RAID implemented on storage systems offers several benefits over hardware RAID implemented on the server hardware.
Typically there are redundant RAID controllers on a storage system, which provides high availability. Storage systems also support multiple drive types, which may not be supported by the RAID controller on a host. Apart from this, storage systems have large caches and multiple intelligent features. This provides better I/O performance, protection, and reliability.
Let's take a look at the different RAID techniques. There are three main RAID techniques: striping, mirroring, and parity. In the striping technique, data is written simultaneously across all the storage drives in a RAID set. Each write to the logical volume spreads the data across all the drives in the RAID set, and each read retrieves data simultaneously from all the drives in the RAID set. Because the I/O is performed in parallel across multiple drives, this significantly improves read/write performance. However, striping does not protect data against drive failure.
In the mirroring technique, the data is stored on two storage drives giving two mirrored copies of the same data. This protects the data in case one drive fails. When a failed drive is replaced with a new one, the RAID controller rebuilds data on the new drive from the remaining intact drive. Mirroring may improve read performance by reading data simultaneously from both the drives in the pair. However, not all RAID
controllers implement this. Mirroring makes writes slower because each write results in an additional write to the other drive in the pair. With mirroring you also need twice as much capacity to store data.
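The write penalty and rebuild behavior of mirroring can be shown with a minimal in-memory sketch (drives modeled as dicts; names are illustrative, not any real API):

```python
# Minimal sketch of mirroring (RAID 1): every write goes to both drives
# in the pair, so either copy alone can serve reads and can be used to
# rebuild a replacement drive.

def mirrored_write(pair, block_number, data):
    for drive in pair:               # the extra write per mirror copy is
        drive[block_number] = data   # why mirrored writes are slower

def rebuild(survivor):
    """Rebuild a replacement drive from the intact mirror copy."""
    return dict(survivor)

pair = [{}, {}]                      # two drives modeled as dicts
mirrored_write(pair, 0, b"important data")
pair[1] = rebuild(pair[0])           # simulate replacing a failed drive
assert pair[0] == pair[1]
```

The doubled capacity cost is visible directly: every block is stored twice.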
In the parity technique, data is striped across all drives except one in a RAID set. The last drive stores a parity value that is computed by performing an Exclusive-OR (XOR) operation on the striped data. In case a drive fails, the data can still be recovered by using the parity and the data on the remaining drives. In this way, parity protects data against drive failure without the need to mirror it. At the same time, it also improves read performance because it uses striping. However, write performance is affected because each time the data changes, the parity has to be recalculated.
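The XOR recovery described above can be demonstrated in a few lines (an in-memory sketch, not a real RAID implementation):

```python
# XOR parity: parity = d0 XOR d1 XOR ... XOR dn. If any single data
# drive fails, XOR-ing the parity with the surviving drives' data
# reproduces the lost block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    p = blocks[0]
    for blk in blocks[1:]:
        p = xor_blocks(p, blk)
    return p

data = [b"AAAA", b"BBBB", b"CCCC"]   # striped data on three drives
p = parity(data)                     # stored on the parity drive

# Drive 1 fails; recover its block from the parity and the survivors:
recovered = parity([p, data[0], data[2]])
assert recovered == b"BBBB"
```

This also shows the write penalty: any change to `data` forces `parity()` to be recomputed before the write is complete.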

Architecture of a storage system

So let’s begin by talking about storage systems, the architecture of a storage system, and the different types of storage systems. A storage system is a hardware component that houses multiple storage drives within a cabinet. These drives provide a very high capacity storage pool for enterprise use. A storage system is also called a storage array. A large storage system can provide many petabytes of storage capacity. A storage system has two primary components: the storage drives and the storage controller. The storage drives in a storage system can be either disk drives, or SSDs, or a combination of both these drive types. Multiple drives of the same type are arranged in a drive enclosure. Multiple drive enclosures are then assembled inside a storage system cabinet. The storage system cabinet has integrated
power supply and cooling systems.

 Apart from the storage drives, the other key component of a storage system is the storage controller.
A storage controller is a computer that is housed in the storage system cabinet along
with the drive enclosures. The storage controller has a processor, memory, and cache. A specialized operating system is installed on the storage controller that manages the storage system. A storage system may also have two storage controllers for high availability. In some implementations the storage controller may even be connected externally to the storage system. This connection may be a direct one
or over a network. The OS of the controller essentially provides intelligence to the storage system. It enables the storage system to meet enterprise requirements such as capacity, scalability, performance, business continuity, and security. If you recall, we covered these requirements in the first week of this course.
The OS manages capacity and provisioning, and also provides a number of enhanced features such as storage pooling, security, and capacity optimization. We will cover these features in a later video in the course. Servers connect to a storage system through the storage controller. As we discussed in the first week, servers may connect to a storage system either directly or over a storage network. With networked storage, multiple servers can connect to a storage system over a storage network, such as a 10 Gbps Ethernet or a 16 Gbps Fibre Channel network. 
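The layout just described can be captured in a small toy model (class and field names are my own invention, not any vendor's API): drive enclosures holding a single drive type, assembled behind redundant controllers.

```python
# Illustrative model of a storage system: enclosures of one drive type
# each, plus a controller count (two for high availability).
from dataclasses import dataclass, field

@dataclass
class DriveEnclosure:
    drive_type: str          # e.g. "HDD" or "SSD"
    drive_count: int
    drive_capacity_tb: float

    def capacity_tb(self) -> float:
        return self.drive_count * self.drive_capacity_tb

@dataclass
class StorageSystem:
    controllers: int = 2     # redundant controllers for availability
    enclosures: list = field(default_factory=list)

    def total_capacity_tb(self) -> float:
        return sum(e.capacity_tb() for e in self.enclosures)

array = StorageSystem(enclosures=[
    DriveEnclosure("HDD", drive_count=24, drive_capacity_tb=8.0),
    DriveEnclosure("SSD", drive_count=24, drive_capacity_tb=1.92),
])
assert array.total_capacity_tb() == 24 * 8.0 + 24 * 1.92
```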

Storage Protocols

In computing, a protocol is a set of commands and rules that enables two entities to communicate with each other. In storage, there are various protocols that allow a server and a storage device to connect and exchange data. Usually, each protocol also provides its own physical interface specifications. This affects the type of connectors and cables that are used to connect the storage drive to a server.
There are various protocols for connecting a storage drive to a server. Some common protocols are:

Serial Advanced Technology Attachment or SATA,
Small Computer System Interface or SCSI,
Serial Attached SCSI or SAS,
Nearline SAS or NL-SAS, and
Fibre Channel or FC.

On the storage device, these protocols are implemented on the drive controller. On the server, these protocols are either implemented on the motherboard or by using adapters that plug into the motherboard. These protocols are applicable to both DAS and networked storage environments.

SATA

The SATA interface is commonly found in consumer desktops and laptops. In enterprises, SATA drives provide cheap, low-performance, high-capacity storage. They are typically used for data backups and archiving.

SCSI

The SCSI interface is popular for enterprise storage. SCSI drives provide parallel transmission and are used for high-performance, mission-critical workloads.
SAS is the serial, point-to-point variant of the SCSI protocol, and it is also used in high-end computing.

Nearline SAS or NL-SAS

NL-SAS is a hybrid of the SAS and SATA interfaces. These drives have a SAS interface and support the SAS protocol, but in the back end they use SATA drives. They cost less than SAS drives and provide the benefit of the SCSI command set, while offering the large capacities of SATA drives.

The Fibre Channel or FC protocol

FC is also based on the SCSI protocol and is a widely used standard for networked storage. It provides very high throughput, with the latest standard supporting transfer rates of up to 16 gigabits per second.
When it comes to disk drives, there is usually a trade-off between the RPM and the capacity of a drive: drives with higher RPM usually have lower capacity, while drives with lower RPM have higher capacity. Therefore, high-speed drives usually implement the protocols that provide better performance. For example, 10K and 15K RPM drives are usually SAS and FC drives, whereas 5.4K and 7.2K RPM drives are usually SATA drives.

Friday, March 3, 2017

Reading Gmail messages in other email clients using IMAP

You can read your Gmail messages in other email clients, such as Microsoft Outlook and Apple Mail, using IMAP. With IMAP, you can read Gmail messages on multiple devices, and the messages are synced in real time.


Setting up IMAP

Step 1: Check that IMAP is enabled

  1. Open Gmail on your computer.
  2. In the top right, click the Settings gear.
  3. Click Settings.
  4. Click the Forwarding and POP/IMAP tab.
  5. In the "IMAP access" section, select Enable IMAP.
  6. Click Save Changes.

Step 2: Change the IMAP settings in your email client

Use the table below to update your client with the correct information. For help updating the settings, search your email client's Help Center for instructions on setting up IMAP.
Incoming mail (IMAP) server: imap.gmail.com
  Requires SSL: Yes
  Port: 993
Outgoing mail (SMTP) server: smtp.gmail.com
  Requires SSL: Yes
  Requires TLS: Yes (if available)
  Requires authentication: Yes
  Port for SSL: 465
  Port for TLS/STARTTLS: 587
Full name or Display name: Your name
Account name, Username, or Email address: Your full email address
Password: Your Gmail password
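The server settings in the table above can also be used programmatically, for example from Python's standard library: imaplib for IMAP over SSL (port 993) and smtplib with STARTTLS (port 587). This is a sketch only; the credentials are placeholders, and Gmail may additionally require an app password or OAuth for third-party clients.

```python
# Sketch of connecting to Gmail with the settings listed above.
# User and password are supplied by the caller; nothing here is
# specific to any particular account.
import imaplib
import smtplib
import ssl

IMAP_HOST, IMAP_PORT = "imap.gmail.com", 993   # IMAP over SSL
SMTP_HOST, SMTP_PORT = "smtp.gmail.com", 587   # SMTP with STARTTLS

def open_mailbox(user: str, password: str) -> imaplib.IMAP4_SSL:
    """Connect to Gmail's IMAP server over SSL and select the inbox."""
    conn = imaplib.IMAP4_SSL(IMAP_HOST, IMAP_PORT,
                             ssl_context=ssl.create_default_context())
    conn.login(user, password)
    conn.select("INBOX")
    return conn

def open_smtp(user: str, password: str) -> smtplib.SMTP:
    """Connect to Gmail's SMTP server, upgrading to TLS via STARTTLS."""
    conn = smtplib.SMTP(SMTP_HOST, SMTP_PORT)
    conn.starttls(context=ssl.create_default_context())
    conn.login(user, password)
    return conn
```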