Wednesday, December 24, 2008

Use Process.Start to open files in their native applications

Dim ps As System.Diagnostics.Process

ps = Process.Start("C:\dhad.pdf")

or

ps = Process.Start("C:\text.txt")


To open Internet Explorer using VB.NET

Dim ps As System.Diagnostics.Process
'to open
ps = Process.Start("IEXPLORE.EXE")

'to kill

ps.Kill()

Visual Basic: Generate a CSV File

First, add a reference to the Microsoft ActiveX Data Objects 2.5 Library.

Option Explicit

Private m_cnDatabase As ADODB.Connection

Private Sub cmdExport_Click()
    Call ExportToCSV("tbl_Watcher")
End Sub

Private Sub Form_Load()
    Set m_cnDatabase = New ADODB.Connection
    With m_cnDatabase
        .CursorLocation = adUseClient
        .ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\folder\Project1\AccessDirectory.mdb;"
        .Open
    End With
End Sub

Private Sub ExportToCSV(ByRef sTable As String)
    Dim sExportLine As String
    Dim rsData As ADODB.Recordset
    Dim hFile As Long
    Dim oField As ADODB.Field

    On Error GoTo PROC_ERR

    ' Open the table.
    Set rsData = New ADODB.Recordset
    With rsData
        .ActiveConnection = m_cnDatabase
        .CursorLocation = adUseClient
        .CursorType = adOpenForwardOnly
        .LockType = adLockReadOnly
        .Source = "SELECT * FROM " & sTable
        .Open
        If (.State = adStateOpen) Then
            hFile = FreeFile
            Open "C:\Temp\" & sTable & ".CSV" For Output As hFile

            ' Print the file header with field names.
            sExportLine = ""
            For Each oField In .Fields
                sExportLine = sExportLine & oField.Name & ","
            Next
            sExportLine = VBA.Left$(sExportLine, Len(sExportLine) - 1)
            Print #hFile, sExportLine

            ' Print one line per record.
            Do Until .EOF
                sExportLine = ""
                For Each oField In .Fields
                    sExportLine = sExportLine & oField.Value & ","
                Next
                sExportLine = VBA.Left$(sExportLine, Len(sExportLine) - 1)
                Print #hFile, sExportLine
                .MoveNext
            Loop
        End If
    End With

PROC_EXIT:
    ' Clean up and exit gracefully.
    If (Not rsData Is Nothing) Then
        With rsData
            If (.State <> adStateClosed) Then
                .Close
            End If
        End With
    End If
    If (hFile <> 0) Then
        Close hFile
    End If
    Exit Sub

PROC_ERR:
    Select Case Err.Number
        Case Is <> 0
            MsgBox "Error " & Err.Number & " (" & Err.Description & ") in procedure ExportToCSV of Form frmMain"
            Err.Clear
            Resume PROC_EXIT
    End Select
End Sub

Monday, December 22, 2008

C# Tutorial For Beginners - Fourth tutorial

Fourth tutorial

Congratulations, you will soon be able to hack CsGL, but there is one last step you should understand: interop (with C code). You will need a C compiler; I advise GCC for Windows, called MinGW. It's free, it's good, it's GCC! We will create three files:

echo.c

#include <stdio.h>

#define DLLOBJECT __declspec(dllexport)

DLLOBJECT void writeln(char* s)
{
printf("%s\n", s);
}
echo.cs

using System;
using System.Runtime.InteropServices;

namespace HelloUtil
{
public class Echo
{
[DllImport("echo.native.dll", CallingConvention=CallingConvention.Cdecl)]
static extern void writeln(string s);

string myString;

public Echo(string aString)
{
myString = aString;
}

public void Tell()
{
writeln(myString);
}
}
}
hello.cs

using System;
using HelloUtil;

public class Hello
{
public static void Main()
{
Echo h = new Echo("Hello my 1st interop code !");
h.Tell();
}
}
Hehe, here you discover a completely new thing: attributes. "[DllImport(..." is an attribute. You can tag any method/field/class with any number of attributes. They generate extra information that can be used by anyone who understands them. This DllImport attribute is understood by the compiler and tells it that the function below is in fact in a DLL whose name is "echo.native.dll". I add a calling convention parameter because the default .NET calling convention is __stdcall whereas, in C, it's __cdecl. By the way, if you look for DllImport in the documentation, look for DllImportAttribute, because you drop the "Attribute" suffix from attribute class names when using them; it's like this.
And now let's compile this!
> csc /nologo /t:library /out:echo.dll echo.cs
> csc /nologo /out:hello.exe /r:echo.dll hello.cs
>
> rem "if the following lines don't work, read below.."
> gcc -shared -o echo.native.dll echo.c
> strip echo.native.dll
The last two lines (the gcc and strip commands) build the C DLL. If they don't work, maybe gcc is not in a directory listed in your PATH environment variable? Check with: echo %PATH%. It probably isn't, so type (assuming MinGW is in C:\MinGW): set PATH=C:\MinGW;%PATH% and try again... You're sure it's not a syntax error? If it compiles, test it now: hello. Great, isn't it?
Now I should admit I didn't tell you the whole truth. echo.dll and echo.native.dll are not the same kind of DLL. It's not just the language (C / C#): the C one is a plain executable full of, probably, x86 instructions, whereas the C# one is what MS calls a portable executable... anyway, they are different. If you install echo.dll in the GAC it won't work, because it won't find echo.native.dll, unless you put the latter into the PATH (like C:\Windows\System32). In the same manner, when you add the reference in VS.NET, echo.native.dll is overlooked and your program won't work... So either put the native one in your path or copy it into the debug/release directory of VS.NET. Or do everything by hand (makefile? build.bat?) and put all your DLLs in your build directory, and everything works fine.

C# Tutorial For Beginners - Third tutorial

Third tutorial

Now you are becoming pretty confident, I guess, so we could start using multiple files, and even a DLL? Go into another directory (or stay in this one, I won't mind) and create two files:

hello.cs

using System;

public class Hello
{
public static void Main()
{
HelloUtil.Echo h = new HelloUtil.Echo("Hello my 1st C# object !");
h.Tell();
}
}
echo.cs

using System;

namespace HelloUtil
{
public class Echo
{
string myString;

public Echo(string aString)
{
myString = aString;
}

public void Tell()
{
Console.WriteLine(myString);
}
}
}
Note that in hello.cs I used the syntax "HelloUtil.Echo"; that's because Echo is in the namespace HelloUtil. You could have typed (at the start of the file) using HelloUtil and avoided the HelloUtil. prefix; that's the way namespaces work.
Now you could compile both into one .exe with:
> csc /nologo /out:hello.exe *.cs
But that's not my intention, no. Well. (Have you tried?) Let's build a DLL:
> csc /nologo /t:library /out:echo.dll echo.cs
That's it (dir will confirm). Now we could use it...
> csc /out:hello.exe /r:echo.dll hello.cs
If you type "hello" it will work as usual..., but if you delete "echo.dll" the program will now crash: it uses the DLL. You could also change echo.cs, rebuild the DLL, and see... that's the advantage of a DLL!
You could also put your DLL in the global assembly cache (GAC), and any program will be able to access it, even if the DLL is not in its directory! To put it in the GAC, I suggest you read the MS docs, but here are the unexplained steps:
Create your assembly key; create it once and use it for every version. You create it with:
sn -k myKeyName.snk
The .snk file should be in your compilation directory (the one where you run csc).
Create a strong assembly title by adding, in any .cs source file, the following directives at top level:
using System.Reflection;
using System.Runtime.CompilerServices;
[assembly: AssemblyTitle("My Lib Title")]
[assembly: AssemblyVersion("1.2.3.4")]
[assembly: AssemblyKeyFile("myKeyName.snk")]

Now add it to the GAC:
> gacutil.exe /if myLib.dll
By the way, did I tell you? When I referenced echo.dll while compiling (remember? csc /out:hello.exe /r:echo.dll hello.cs), it could have been any assembly, even a .exe!

C# Tutorial For Beginners - Second tutorial

Second tutorial

Congratulations, you've done the most difficult part; let's increase the difficulty and create an object instance. In the DOS shell, create a new directory:
> md ..\learncs2
> cd ..\learncs2
> notepad hello.cs
and then type, in Notepad:

using System;

public class Echo
{
string myString;

public Echo(string aString)
{
myString = aString;
}

public void Tell()
{
Console.WriteLine(myString);
}
}

public class Hello
{
public static void Main()
{
Echo h = new Echo("Hello my 1st C# object !");
h.Tell();
}
}
Wouah, 25 lines! That's a program! Save it, compile it, run it... What happened? csc looks for a Main() function in your program; it should find one (and only one), and it will be the entry point of your program. In this tutorial we created two classes: Echo and Hello. In the Main() method you create an Echo object (an instance of the Echo class) with the keyword new. Then we called the instance method Tell(). The upper-case letter on class or method names is just an MS convention; do as it pleases you. public is a visibility access; methods which are not public cannot be seen from outside. There are also other visibility keywords; to learn more, click on Start menu -> Programs -> Microsoft .NET Framework SDK -> Documentation. There is a search window, an index window, etc... Try to learn more about public, private, and protected.

C# Tutorial For Beginners - First tutorial

You should first open a DOS command shell. (If you don't know what it is, click on the Start menu, then Run (at the bottom), and type, in the text field: "cmd". Exercise: there is an easier way to do that; try to find it.) You should begin to work in an empty directory for this. Let's call it "C:\learncs". Type in the shell: > md C:\learncs
> cd C:\learncs
> C:
Now you should create your first C# program. Type "notepad hello.cs" and type (in Notepad):

using System;

public class Hello
{
public static void Main()
{
Console.WriteLine("Hello C# World :-)");
}
}
The using keyword just lets you write Console at line 7, instead of System.Console. It's a very useful shortcut when you use a lot of classes defined in System. Save the file. Now you can compile. Go to the DOS shell again and type:
csc /nologo /out:hello.exe hello.cs
You probably have some errors; correct them, compile again, and now you have a working hello.exe program... type hello, see...

Tuesday, December 16, 2008

How to: Determine the User's Domain

You can use the My.User object to get information about the current user. This example shows how to use the My.User.Name property to get the user's domain name if the application uses Windows authentication.

Because the application uses Windows authentication by default, My.User returns the Windows information about the user who started the application.

Example
This example checks if the application uses Windows authentication before parsing the My.User.Name property to determine the domain name.
This example returns an empty string if the application uses custom authentication, because an implementation of custom authentication does not necessarily provide domain information.



Function GetUserDomain() As String

If TypeOf My.User.CurrentPrincipal Is _
Security.Principal.WindowsPrincipal Then
' My.User is using Windows authentication.
' The name format is DOMAIN\USERNAME.
Dim parts() As String = Split(My.User.Name, "\")
Dim domain As String = parts(0)
Return domain
Else
' My.User is using custom authentication.
Return ""
End If
End Function

Source : http://msdn.microsoft.com/en-us/library/ztz70aw9(VS.80).aspx

How to: Determine if a User is in a Group

You can use the My.User object to get information about the current user. This example shows how to use the My.User.IsInRole method to determine if the user is a member of a particular group.

Example
This example uses the My.User.IsInRole method to determine if the user is an administrator before accessing a resource.

If My.User.IsInRole( _
ApplicationServices.BuiltInRole.Administrator) Then
' Insert code to access a resource here.
End If

How to: Determine a User's Login Name

You can use the My.User object to get information about the current user. This example shows how to use the My.User.Name property to get the user's login name.
An application uses Windows authentication by default, so My.User returns the Windows information about the user who started the application.

Example
This example checks if the application uses Windows or custom authentication, and then uses that information to parse the My.User.Name property.

Function GetUserName() As String

If TypeOf My.User.CurrentPrincipal Is _
Security.Principal.WindowsPrincipal Then
' The application is using Windows authentication.
' The name format is DOMAIN\USERNAME.
Dim parts() As String = Split(My.User.Name, "\")
Dim username As String = parts(1)
Return username
Else
' The application is using custom authentication.
Return My.User.Name
End If

End Function

How to Use the LIKE Operator in Parameter Queries

An easy way to search for records that begin with a certain letter is to create a parameter query that prompts for a character to search for. You can use the LIKE operator with the wildcard character (*) to accomplish this task. This example uses the Employees table in the sample database Northwind.mdb.

Method to Create Parameter Query Using the LIKE Operator


1) Create a new query based on the Employees table.
2) Drag LastName to the Field row and then type the following line in the Criteria row for the LastName field:
LIKE [Enter the first char to search by: ] & "*"
-or-
LIKE "*" & [Enter any char to search by: ] & "*"

NOTE: The Parameters dialog box is a fixed width, so all the characters that you type may not display.


3) When you run this query, you will be prompted with the message that you specified in the LIKE statement. The first LIKE statement finds all the last names that begin with the letter that you type into the parameter prompt.

For example, to find records where the last name starts with a "L", type L and begin the search. The second LIKE statement finds all the last names that have the letter that you type into the parameter prompt anywhere in the field.

For example, to find records where the last name has a "L" anywhere in the field, type L and begin the search.
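Behind the query design grid, the criteria from step 2 produce SQL along these lines (a sketch, simplified from what Access actually generates; the table and prompt text are the ones used above):

```sql
SELECT LastName
FROM Employees
WHERE LastName LIKE [Enter the first char to search by: ] & "*";
```

When the query runs, Access treats the bracketed text as a parameter prompt and concatenates the character you enter with the * wildcard before matching.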

Tuesday, December 9, 2008

Preventing user from showing Task Manager Using VB.NET

Create a form called Form1 with a CheckBox called chkDisableCtr and a button called btnApply in order for this to work.

Imports Microsoft.Win32

Public Class Form1

Private _reg As New Form1.TaskManager

Private Sub chkDisableCtr_CheckedChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles chkDisableCtr.CheckedChanged
Me.btnApply.Enabled = True
End Sub

Private Sub btnApply_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnApply.Click
'write the code to disable/enable the Ctrl+Alt+Deleted combination on Win2k/XP
_reg.SetTaskManager(CType(IIf(chkDisableCtr.Checked, TaskManager.TaskManagerState.Disabled, _
TaskManager.TaskManagerState.Enabled), TaskManager.TaskManagerState))
'disable the button
btnApply.Enabled = False
End Sub

Private Sub Form1_Closing(ByVal sender As Object, ByVal e As System.ComponentModel.CancelEventArgs) Handles MyBase.Closing
_reg.Dispose()
End Sub

Private Sub Loaded(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
chkDisableCtr.Checked = _reg.GetTaskManagerState = TaskManager.TaskManagerState.Disabled
End Sub

Private Class TaskManager

Implements IDisposable

Public Enum TaskManagerState As Integer
Disabled = 1
Enabled = 0
End Enum

Private _hkcu As RegistryKey = Registry.CurrentUser

Private Const _subKey As String = "Software\Microsoft\Windows\CurrentVersion\Policies\System"

Public Sub SetTaskManager(ByVal _state As TaskManagerState)
Dim reg As RegistryKey = _hkcu.OpenSubKey(_subKey, True)
'if we got nothing, and we are supposed to be disabling it, create the key
If reg Is Nothing AndAlso _state = TaskManagerState.Disabled Then
reg = _hkcu.CreateSubKey(_subKey)
ElseIf reg Is Nothing AndAlso _state = TaskManagerState.Enabled Then
'only get here if we are enabling; the key does not exist, so there is nothing to do
Exit Sub
End If
'change the value...
reg.SetValue("DisableTaskMgr", CInt(_state))
End Sub

Public Function GetTaskManagerState() As TaskManagerState
Dim _val As Integer = -1
Dim _reg As RegistryKey = _hkcu.OpenSubKey(_subKey)
'if we got nothing then the task manager is enabled
If _reg Is Nothing Then
Return TaskManagerState.Enabled
Else
_val = CInt(_reg.GetValue("DisableTaskMgr"))
End If
'if we got here there was a value and we need to decode it...
'a value of 1 indicates a disabled task manager...
Return CType(IIf(_val = 1, TaskManagerState.Disabled, TaskManagerState.Enabled), TaskManagerState)

End Function

Protected Overrides Sub Finalize()
Me.Dispose()
MyBase.Finalize()
End Sub

Public Sub Dispose() Implements System.IDisposable.Dispose
Try
_hkcu.Close()
_hkcu = Nothing
GC.SuppressFinalize(Me)
Catch
'you shouldn't have dropped it in the first place
End Try
End Sub
End Class

End Class

Enable and disable regedit program

Hi Friends,
this program enables and disables regedit. OK, now open Notepad and type this code:

var vbCancel = 2;
var vbYesNoCancel = 3;
var vbYes = 6;
var vbNo = 7;
var vbQuestion = 32;
var vbInformation = 64;
var natan = WScript.CreateObject("WScript.Shell");
var pesan1 = "Regedit safe guard option:\n\n"+
"[Yes] Deactivate regedit\n"+
"[No] Activate regedit\n"+
"[Cancel] Exit\n\n"+
".::http://allinterviewtips.blogspot.com/::.\n\n"+
"Are you sure you want to deactivate regedit now?";
var tanya = natan.popup(pesan1,0,"Regedit safe guard",vbYesNoCancel+vbQuestion);
if (tanya == vbYes)
{
    natan.RegWrite("HKCU\\Software\\Microsoft\\Windows\\CurrentVersion"+"\\Policies\\System\\DisableRegistryTools",1,"REG_DWORD");
    var pesan2 = "Regedit deactivated!";
    natan.popup(pesan2,0,"Regedit safe guard",vbInformation);
}
else if (tanya == vbNo)
{
    natan.RegWrite("HKCU\\Software\\Microsoft\\Windows\\CurrentVersion"+"\\Policies\\System\\DisableRegistryTools",0,"REG_DWORD");
    var pesan3 = "Regedit activated!";
    natan.popup(pesan3,0,"Regedit safe guard",vbInformation);
}
else
{
    natan.popup("Exit",0,"Regedit safe guard",vbInformation);
}

Finally, save it as a *.js file and run it with a double click, or right-click and choose to open it with Command Prompt.

How to enable registry when infected by virus

When your Registry Editor is disabled, there are two likely causes:
First, maybe the administrator disabled it for some restriction purposes.
Second, a virus. Most viruses disable regedit so that you are unable to stop the execution of their program.

Here are the steps for enabling regedit again, using gpedit.msc to enable the Registry Editor.

Step 1: Hit the Windows key and press "R", or simply click the Start button and then click Run
Step 2:
type gpedit.msc
Step 3:
Click on User Configuration >> Administrative Templates
Step 4:
Click System, locate Disable registry editing tools (Prevent access to registry editing tools), and double-click on it
Step 5: Select Enabled on the option button, then click Apply.
This creates a policy that prevents access to the registry editing tools; the computer applies the policy automatically.
Step 6: After clicking Apply, select Disabled on the option button, then click Apply again, and click OK when finished.
The Disabled setting returns the policy to its default; the computer configures it automatically, and with the default configuration the Registry Editor can be accessed by the user.

And that's it... Try running regedit.exe... Have fun!

Run batch file at Windows NT startup with no one logged on

The AutoExNT service allows you to run a batch file, Autoexnt.bat, when you boot NT, without having to log on to that computer. AutoExNT.exe is a service which runs the autoexnt.bat file. AutoExNT is an NT Resource Kit utility. You use the instexnt install command to install AutoExNT. To allow AutoExNT to function, set the service to start automatically in Control Panel / Services. The documentation says to contact rkinput@microsoft.com with questions or feedback concerning this utility.
Get more information from the Microsoft KB Article.
A sample autoexnt.bat file :

echo "Running AutoExNT.Bat" >> C:\WINNT\LOGS\autoexnt.Log
date /T >> C:\WINNT\LOGS\autoexnt.Log
time /T >> C:\WINNT\LOGS\autoexnt.Log
C:\Perl\Perl.Exe startup.pl >> C:\WINNT\LOGS\autoexnt.Log

An alternative approach for applications is to use the Resource Kit utility SRVANY which allows applications to run as services. It has the advantage of allowing console interaction. The tip also lists non-microsoft approaches to the problem.

Microsoft Baseline Security Analyzer (MBSA)

Microsoft is beginning to release useful security oriented tools. Microsoft Baseline Security Analyzer (MBSA) checks Windows NT 4 SP4 and up, Windows 2000, and Windows XP for common security vulnerabilities. MBSA can be installed on Windows 2000 and Windows XP. MBSA currently performs five checks:

Hotfix checks : scans for missing hotfixes for Windows NT 4, Windows 2000, all system services, SQL 7.0, SQL 2000, and IE 5.01 and later.

Password checks : checks for blank and weak passwords.

Vulnerability checks : scans for security issues and common configuration mistakes in Windows
operating systems (NT4, 2000, and XP).

IIS checks : scans for security issues in IIS 4.0 and 5.0.

SQL vulnerability checks : scans for security issues in SQL 7.0 and 2000.

The tool can be run in GUI mode ( mbsa.exe ) or more usefully for automated periodic checks, in command line mode (mbsacli.exe).

Hide Desktop Icons

To hide all Desktop Icons from Explorer, use the following Windows NT / Windows 2000 / Windows XP registry hack :

Hive: HKEY_CURRENT_USER
Key: Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
Name: NoDesktop
Type: REG_DWORD
Value: 1

With this key enabled, you also cannot right-click on the desktop to get a context menu. This is a lockdown option.

Almost all Windows NT registry hacks work for Windows 2000 and Windows XP. Windows NT and Windows XP have the Software\Microsoft\Windows\CurrentVersion\Policies\Explorer key by default. Windows 2000 does not. But if you create the key with the NoDesktop value set to 1, the hack works for Windows 2000 also. When you create the Explorer key under Policies, you will be prompted for a class. Leave it blank.
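As a sketch, the same NoDesktop hack can be packaged as a .reg file and imported (double-click it, or run regedit /s on it); the hive, key, name, type, and value are exactly the ones listed above:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDesktop"=dword:00000001
```

Importing the file creates the Explorer key if it is missing, which covers the Windows 2000 case described above. Log off and back on (or restart Explorer) for the change to take effect.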

To hide all Desktop Icons from Explorer but still enable right-clicking on the desktop there is the following registry hack :

Hive: HKEY_CURRENT_USER
Key: Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
Name: HideIcons
Type: REG_DWORD
Value: 1

Internet Connection Sharing with Windows 2000 Professional acts as a proxy server

Windows 2000 Professional supports a version of NAT ( network address translator ) called Internet Connection Sharing ( ICS ). If you have a small office or home network, you can get shared Internet access through a single PC running Windows 2000 Professional or Win98. Get an ADSL or cable modem connection for Professional. ICS provides network address translation, address allocation, and name resolution services for the computers on your small network. It actually acts as a router with NAT, rather than a proxy server. It routes and translates the addressing of the packets into and out of the private network to the Internet.

A network address translator is an ip router defined in RFC 1631 that can translate ip addresses and tcp / udp port numbers of packets as they are being forwarded. The Windows 2000 Professional workstation running ICS connects to the Internet with your ISP-provided ip address and acts as DHCP allocator, DNS proxy, and router for the other PCs in your private network needing access to the Internet. The PCs in your private network are given ip addresses from the private network 192.168.0.0 with subnet mask 255.255.255.0, reserved by RFC 1918.

The ICS-enabled Windows 2000 Professional workstation is multihomed, with one nic connected to the Internet and the other nic connected to your private network. One of the nics could be a modem, but it's not practical to share access that way except via ISDN, ADSL, or cable modem. The ICS-enabled W2K workstation's nic should have the address 192.168.0.1. It acts as a gateway for the client PCs.

For outgoing and incoming packets, the source private ip address and tcp / udp port are mapped by ICS to the ISPs ip address and ports. To enable ICS:
In Control Panel, double-click the Network and Dial Up Connections
Right-click on the icon that represents the connection that is to be configured for sharing and choose Properties
Click on the Sharing tab and put a check in the box "Enable Internet Connection for this Connection"
If the connection that is to be shared is a dial-up connection, check the box "Enable On Demand Dialing"
A warning appears concerning connectivity with other members of the network being lost, choose Yes, and continue.

You should not use ICS if computers on your network use static TCP/IP addresses, if there is a Windows 2000 domain controller on the network, other DNS servers, other DHCP servers, or gateways configured on the network. This is because ICS creates a static address for the NIC and allocates TCP/IP addresses to the other computers on your network. If there are other DHCP or DNS servers on the network, multiple problems will occur. Here are some common problems and their solutions when implementing ICS:

The error message: "Cannot enable shared access. Error 783: Internet Connection Sharing cannot be enabled. The LAN connection selected as the private network is either not present, or is disconnected from the network. Please ensure that the LAN adapter is connected before enabling Internet Connection Sharing."
Solution: This problem occurs when the address 192.168.0.1 is already in use on the network. To work around this problem, either change the IP address of the computer that is using this number, or disconnect the computer from the network.

Access to the intranet may be extremely slow when ICS is enabled. This difficulty occurs with no other discernable problems on the network, no conflicts with IP addresses, no DSL or phone connectivity problems, and no conflicts with DNS or DHCP servers on the network. Solution: Oddly enough, this problem can occur if the host computer has multiple NICs that are manufactured by 3Com, and more specifically, the 3Com PCI 3C905B. If there are multiple cards on the host computer and they all share the same IRQ, this problem can occur. Replace the cards.

A problem may occur on Windows 2000 Professional machines that use PPP over Ethernet (PPPoE) for the outbound connection along with ICS. These clients may have trouble sending email with attachments or browsing certain web sites. Solution: PPPoE requires a maximum transmission unit (MTU) of no more than 1,492 on all client computers. The default size is 1,500. Changing the size of the MTU may solve this problem.
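One way to lower the MTU on a Windows 2000 client is the per-interface MTU registry value under the TCP/IP parameters key. This is a sketch only: the adapter GUID shown is a placeholder you must replace with the GUID of your own network interface, and a reboot is needed afterwards.

```
Windows Registry Editor Version 5.00

; MTU is in bytes; 0x5d4 = 1492 decimal
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{your-adapter-GUID}]
"MTU"=dword:000005d4
```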

After upgrading a Windows 98 SE machine to Windows 2000 Professional, ICS no longer works. Solution: The ICS settings are not automatically migrated when this upgrade is performed. To solve this problem, simply reconfigure the ICS settings on the upgraded computer.
Additional resources:
Windows XP Internet Connection Sharing
Configuring the ICS Computer
Win98 Annoyances ICS
HelpWithWindows' How to install ICS
Q234815 : Description of Internet Connection Sharing
WinGate Another solution
Midpoint Gateways Another solution
Comsocks Another solution

Enable / Disable Task Manager in Windows 2000

There is a registry hack to enable or disable Windows NT TaskManager. The same registry hack applies to Windows 2000 and Windows XP.
Hive: HKEY_CURRENT_USER
Key: Software\Microsoft\Windows\CurrentVersion\Policies\System
Name: DisableTaskMgr
Type: REG_DWORD
Value: 1 = Enable this key, that is, DISABLE Task Manager
Value: 0 = Disable this key, that is, don't disable (enable) Task Manager

As part of the enhanced management available in Windows 2000 and Windows XP, rather than risking a registry change, as an administrator you can enable or disable Windows 2000 Pro or Windows XP Pro's TaskManager using Group Policy Editor. This can be applied to the local policy.

Note: if you are trying to override your organization's group policy, you can't. As soon as you re-authenticate to the domain, the domain or OU Group Policy will rewrite the registry setting. But if Task Manager was accidentally disabled, or you need to control this item for a set of standalone boxes, this is for you:

Click Start
Click Run
Enter gpedit.msc in the Open box and click OK
In the Group Policy settings window
Select User Configuration
Select Administrative Templates
Select System
Select Ctrl+Alt+Delete options
Select Remove Task Manager

Double-click the Remove Task Manager option. And as I mentioned above, since the policy is Remove Task Manager, by disabling the policy you are enabling Task Manager.

5 tips to improve physical access security

There is a wealth of information on how to secure your computer against remote intrusions and infections by malicious mobile code on the Internet, and such topics are central to a lot of formal IT security education. Physically securing a computer against theft is generally pretty easy, if you’re smart about it. A more problematic area of security for your IT resources is that of securing them against unauthorized use when someone has physical access to them.
Whole books — whole libraries, even — of discussion of this subject have been written, for purposes of controlling how computers are used on a corporate network, monitoring their use, and even dealing with the sticky problem of policy enforcement. An oft-neglected matter is that of just ensuring that other people do not have unauthorized access when you leave your computer unattended for a few minutes.

There may be any number of reasons this facet of computer security is important to you, in particular. For instance:
While you may think the workplace is a safe place to leave your computer unattended, even when there isn’t strict employee monitoring going on, it’s always worth ensuring you don’t fall prey to the malicious behavior of disgruntled employees or unexpected visitors.
In a workplace where employee behavior is audited based on activity under login name, it may be desirable to ensure that nobody else can do something under your user account while you’re on break.

If you spend a fair bit of time in coffee shops and other public places, working (or playing) with your laptop, you may find yourself certain it won’t be stolen but not so certain that someone won’t do something with it while you’re away. Even normally trustworthy friends with whom you leave your laptop might have a mischievous streak and decide to change your GUI configuration to use a painful color scheme, such as MS Windows’ Hot Dog Stand theme.
While one would hope you do not leave your computer so unprotected as to get it stolen, protecting your sensitive data against recovery by thieves can be very important.
Let’s assume you use the obvious, high-tech measures that are all the rage these days — e.g. full disk encryption, strong password security for OS login, and individual file encryption where warranted. More immediate concerns, of the sort that can help protect you when you leave your laptop to go to the bathroom or when you leave your desk for an IT department meeting, should still be addressed. Five simple measures that can be taken to improve the security of your system against those who have direct access follow:

1. Set a BIOS/CMOS password.
On one hand, setting a BIOS/CMOS password for a computer doesn’t really provide much in the way of “real” security. If someone doesn’t mind taking apart the computer and pulling the CMOS battery off the motherboard, it’s easy to bypass a BIOS/CMOS password. On the other hand, if someone is only going to have access to your computer for a few minutes while you’re away from it, that can prove a significant stumbling block — a problem that could slow down someone’s ability to get in and out before you get back. Since the BIOS/CMOS password would then be cleared, rather than simply cracked, you would also have a pretty good indicator that someone was trying to get unauthorized access to what’s on your computer.

2. Disable booting from external media.
With the ability to carry around an operating system on a floppy disk, a bootable CD or DVD, or even a USB flash media storage device, any number of security cracking tools can be brought to bear very quickly by simply inserting such bootable media into the appropriate drive and rebooting the machine. If you have all boot options other than your hard drive disabled in the CMOS settings, though, those settings would have to be changed before someone could boot another OS loaded with automated security cracking tools. If you have a BIOS/CMOS password set, the would-be security cracker will not be able to change those boot device settings without clearing the CMOS settings, as I described above.

3. Always lock your screen and/or log out when away from the computer.
Leaving your computer running with everything still active and receptive to user input while you’re away is the quickest and easiest way to give unauthorized people access to a lot of stuff on your computer. Full disk encryption doesn’t do much good if you leave it running with the disk decrypted for use so any old joker can come along and sit down in front of it, pretending to be you long enough to copy sensitive files to a USB flash media storage device or — perhaps even easier — email them to himself via GMail or Yahoo! Mail. Use your system’s screen locking functionality to protect against this kind of physical access, such as a screen saver that won’t deactivate without a password, or just log out of everything so anyone that wants access has to log in again.
Some GUI environments don’t include this kind of functionality by default, of course, including my own window managers of choice (AHWM and wmii). Users of lightweight GUI environments like these are not without options, however; I use a tiny little screen locking utility called slock to get the screen locking capabilities I need, and it works brilliantly. If you use a tool like that, however, make sure you remember to log out of your TTY consoles as well, because slock and its kin will only lock the X session — not the TTY consoles.

4. Only use secure memory for encryption tools.
As I explained in the “insecure memory” FAQ, encryption tools that take a password have to be able to store that password somewhere when you use it — and if your computer’s RAM is being taxed by heavy usage, some of what’s in memory might get swapped to disk (i.e., stored in the page file, in Microsoft terminology). If that happens, it becomes difficult to ensure that the data will not still be there when you shut down your computer, sitting inertly on the hard drive, waiting for someone to come along with a simple forensic tool to recover your encryption password.
The key is to make sure you’re using secure memory — basically, memory that is managed differently from the way RAM usage is normally managed by the OS, so that the contents of the memory locations set aside for a given application will never be swapped to disk. See the “insecure memory” FAQ for more details. While you’re at it, make sure you don’t leave a computer unattended where others can get at it for a few minutes after you first shut it down, because even data stored only in RAM can sometimes be recovered if a malicious security cracker with physical access to the machine is very quick about it.

5. Set speedbumps in the way of unauthorized password recovery.
Most modern, general purpose OSes these days offer options for recovering from varying degrees of system corruption and user error. Some of these can even provide a means of recovering or resetting a lost administrator password — which then, in theory, gives one almost unfettered access to everything on the system (barring need for additional passwords in the case of encrypted files and the like). One of the easiest ways to accomplish this is with alternate operating modes, such as MS Windows Safe Mode and Unix (and Linux) Single-User Mode.
Safe Mode can ensure that a lot of security software is disabled on MS Windows, including some logging tools and encryption utilities that you may use. One stumbling block you can put in the way of the would-be security cracker is simply to give the Administrator account a password; by default, MS Windows XP (for instance) creates the Administrator account without a password, which is a terrible lapse in good security practice. Rectify that problem, and Safe Mode will be inaccessible to the casual, “drive-by” unauthorized person who wants access to your system. If such a person has one of the dozens of simple MS Windows password recovery tools available for free download from the Internet, though, this won’t be much of a barrier to entry.
Unix and Unix-like systems, on the other hand, tend to be more difficult to crack when it comes to circumventing the root password. Such OSes do, however, have a single-user mode that can provide root-level access to much of the system if it isn’t set up properly. To close this hole, you can configure the TTY consoles to require the root password. How this is accomplished varies from system to system: on FreeBSD and Apple MacOS X, the configuration options you need are in the /etc/ttys file, and on many Linux systems they’re in the /etc/securetty file.
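As a sketch of the FreeBSD case (just the relevant line, not a complete /etc/ttys): marking the console "insecure" makes single-user mode prompt for the root password before handing out a shell.

```
# /etc/ttys (FreeBSD): changing "secure" to "insecure" on the console
# line forces a root password prompt before single-user mode grants a shell
console none    unknown off insecure
```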

Wrapping Up
Obviously, this article isn’t intended to provide you with better perimeter security in your enterprise network, or to teach you how to perform a site survey or penetration test. It is, however, meant to remind you about the sort of security measures that we should all employ on an individual basis, no matter what the context — work, home, school, et cetera — in one of the most overlooked, but most common, cases of vulnerability created by user carelessness. It isn’t comprehensive (it’s only a five item list, after all), but it gives you a place to start.
Often, the weakest link in a chain of security is the user. Don’t let that be true of you.

The 10 most useful Linux commands

I understand that many of you don’t want to use the command line in Linux (or in any operating system, for that matter). But the truth is, to be a good administrator, you have to know the command line. Why? Well, with Windows there are times when the command line is the only thing that can save your skin. With Linux, the command line is vast, reliable, flexible, fast… I could go on and on.
And of the 2,119 possible commands from the /usr/bin directory (in Mandriva Spring 2008) and the 388 possible commands from /usr/sbin/, a few are indispensable. Here are 10 of them that might make your Linux admin life — or your introduction to Linux — a whole lot simpler.
I could make this easy and go with the most used commands (cd, ls, rm, etc — okay, etc isn’t a command, but you get the point). But instead, I am going to go with the most useful commands, and I’ll keep it as distribution-neutral as I can.

#1: top
I figured it was fitting to put the top command at the top. Although top is actually responsible for listing currently running tasks, it is also the first command Linux users turn to when they need to know what is using their memory (or even how much memory a system has). I often leave the top tool running on my desktop so I can keep track of what is going on at all times. Sometimes, I will even open up a terminal (usually aterm), place the window where I want it, and then hide the border of the window. Without a border, the terminal can’t be moved, so I always have quick access to the information I need.
Top is a real-time reporting system, so as a process changes, the change is immediately reflected in the terminal window. Top does have some helpful arguments (such as the -p argument, which tells top to monitor only user-specified PIDs), but run with its defaults, top will give you all the information you need on running tasks.
#2: ln
To many administrators, links are an invaluable tool that not only makes users’ lives simpler but also drastically reduces disk space usage. If you are unaware of how links can help you, let me pose this simple scenario: You have a number of users who have to access a large directory (filled with large files) on a drive throughout the day. The users are all on the same system, and you don’t want to copy the entire directory into each user’s ~/ directory. Instead, just create a link in each user’s ~/ directory to the target. You won’t consume space, and the users will have quick access. Of course, when spanning drives, you will have to use symlinks. Another outstanding use for links is linking various directories to the Apache doc root directory. Not only can this save space, it’s often advantageous from a security standpoint.
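To make that scenario concrete, here's a quick sketch using throwaway /tmp paths (the directory and file names are invented for illustration):

```shell
# Illustrative paths: one shared directory and one user's home
rm -rf /tmp/shared-data /tmp/home-jdoe
mkdir -p /tmp/shared-data /tmp/home-jdoe
echo "quarterly report" > /tmp/shared-data/report.txt

# One symbolic link per user instead of one full copy per user
ln -s /tmp/shared-data /tmp/home-jdoe/data

# The user reads the file right through the link
cat /tmp/home-jdoe/data/report.txt
```

Deleting the link leaves the original data untouched, and unlike hard links, symlinks work across file systems.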
#3: tar/zip/gzip
Tar, zip, and gzip are archival/compression tools that make your administrator life far easier. I bundle these together because the tools can handle similar tasks yet do so with distinct differences (just not different enough to warrant their own entry in this article). Without these tools, installing from source would be less than easy. Without tar/zip/gzip, creating backups would require more space than you might often have.
One of the least used (but often most handy) features of these tools is the ability to extract single files from an archive. Zip and gzip handle this a bit more easily than tar; with tar, to extract a single file you have to give the exact path of the file as it is stored in the archive. One area where tar/zip/gzip make administration simple is in creating shell scripts that automate a backup process. All three tools can be used with shell scripts and are, hands down, some of the most reliable backup tools you will find.
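Here's a small, self-contained sketch of that single-file extraction trick (the file names are made up for the example):

```shell
# Build a small archive, then pull a single file back out by name
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/etc
echo "hosts data" > /tmp/tardemo/etc/hosts.bak
echo "other data" > /tmp/tardemo/etc/other.bak

cd /tmp/tardemo
tar czf backup.tar.gz etc/

# Extracting one file requires the exact path as stored in the archive
rm -rf etc
tar xzf backup.tar.gz etc/hosts.bak
cat etc/hosts.bak
```

Only etc/hosts.bak comes back; the rest of the archive stays packed.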
#4: nano, vi, emacs
I wasn’t about to place just one text editor here, for fear of stoking the fires of the “vi vs. emacs” war. To top that off, I figured it was best to throw my favorite editor — nano — into the mix. Many people would argue that these aren’t so much commands as they are full-blown applications. But all these tools are used within the command line, so I call them “commands.” Without a good text editor, administering a Linux machine can become problematic.
Imagine having to attempt to edit /etc/fstab or /etc/samba/smb.conf with OpenOffice. Some might say this shouldn’t be a problem, but OpenOffice tends to add hidden end-of-line characters to text files, which can really fubar a configuration file. For the editing of configuration or bash files, the only way to go is with an editor such as nano, vi, or emacs.
#5: grep
Many people overlook this amazingly useful tool. Grep prints lines that match a user-specified pattern. Say, for instance, that you are looking at an httpd.conf file that’s more than 1,000 lines long, and you are searching for the “AccessFileName .htaccess” entry. You could comb through the file by hand, or you can issue the command grep -n “AccessFileName .htaccess” /etc/httpd/conf/httpd.conf. That command returns “439:AccessFileName .htaccess”, which tells you the entry you are looking for is on, surprise of all surprises, line 439.
The grep command is also useful as a filter for the output of other commands. An example of this is using grep with the ps command (which takes a snapshot of current running processes). Suppose you want to know the PID of a crashed Firefox browser. You could issue ps aux and search through the entire output for the Firefox entries. Or you could issue the command ps aux | grep firefox, at which point you might see something like this:

jlwallen 17475  0.0  0.1   3604  1180 ?     Ss  10:54  0:00 /bin/sh /home/jlwallen/firefox/firefox
jlwallen 17478  0.0  0.1   3660  1276 ?     S   10:54  0:00 /bin/sh /home/jlwallen/firefox/run-mozilla.sh /home/jlwallen/firefox/firefox-bin
jlwallen 17484 11.0 10.7 227504 97104 ?     Sl  10:54 11:50 /home/jlwallen/firefox/firefox-bin
jlwallen 17987  0.0  0.0   3112   736 pts/0 R+  12:42  0:00 grep --color firefox
Now you know the PIDs of every Firefox command running.
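If you want to try grep -n without a 1,000-line httpd.conf handy, a throwaway file shows the same behavior:

```shell
# A scratch config file to search (contents are illustrative)
printf 'ServerRoot /etc/httpd\nAccessFileName .htaccess\nTimeout 300\n' > /tmp/demo.conf

# -n prefixes each matching line with its line number
grep -n "AccessFileName" /tmp/demo.conf   # -> 2:AccessFileName .htaccess
```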
#6: chmod
Permissions anyone? Linux administration and security would be a tough job without the help of chmod. Imagine not being able to make a shell script executable with chmod u+x filename. Of course it’s not just about making a file executable. Many Web tools require certain permissions before they will even install. To this end, the command chmod -R 666 DIRECTORY/ is one very misused command. Many new users, when faced with permissions issues trying to install an application, will jump immediately to 666 instead of figuring out exactly what permissions a directory or folder should have.
Even though this tool is critical for administration, it should be studied before jumping in blindly. Make sure you understand the ins and outs of chmod before using it at will. Remember w=write, r=read, and x=execute. Also remember UGO or User, Group, and Other. UGO is a simple way to remember which permissions belong to whom. So permission rw- rw- rw- means User, Group, and Other all have read and write permissions. It is always best to keep Other highly restricted in their permissions.
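As a quick, safe illustration of symbolic modes, using a scratch file rather than a real web directory:

```shell
# Start from a known state, then tighten with symbolic modes
printf '#!/bin/sh\necho ok\n' > /tmp/backup.sh
chmod 644 /tmp/backup.sh        # rw-r--r--
chmod u+x,o-rwx /tmp/backup.sh  # now rwxr-----: owner can run it, "other" is locked out

/tmp/backup.sh                  # -> ok
```

Checking with ls -l (or GNU stat -c %a, which reports 740) confirms "other" ends up with no access at all.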
#7: dmesg
Call me old-school if you want, but any time I plug a device into a Linux machine, the first thing I do is run the dmesg command. This command displays the messages from the kernel ring buffer. So, yeah, this is an important one. There is a lot of information to be garnered from dmesg: system architecture, CPU, network devices, kernel boot options used, RAM totals, etc.
A nice trick is to pipe dmesg to tail to watch for new messages. To do this, issue the command dmesg | tail -f and the last few lines of dmesg will remain in your terminal. Every time a new entry arrives it will be at the bottom of the “tail.” Keep this window open when doing heavy-duty system administration or debugging a system.
#8: kill/killall
One of the greatest benefits of Linux is its stability. But that stability doesn’t always apply to applications outside the kernel. Some applications can actually lock up. And when they do, you want to be able to get rid of them. The quickest way to get rid of locked up applications is with the kill/killall command. The difference between the two commands is that kill requires the PID (process ID number) and killall requires only the executable name.
Let’s say Firefox has locked up. To kill it with the kill command, you would first need to locate the PID using the command ps aux | grep firefox. Once you have the PID, you would issue kill PID (where PID is the actual process ID number). If you don’t want to bother finding the PID, you can issue the command killall firefox (although in some instances it will require killall firefox-bin). Of course, kill/killall do not apply (nor should they apply) to daemons like Apache, Samba, etc.
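You can rehearse the whole workflow safely with a throwaway sleep process standing in for a crashed browser:

```shell
# Start a disposable background process and capture its PID
sleep 300 &
pid=$!

kill "$pid"                     # polite SIGTERM; kill -9 is the hammer of last resort
wait "$pid" 2>/dev/null || true # reap the dead process

# kill -0 sends no signal; it just tests whether the PID still exists
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```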
#9: man
How many times have you seen “RTFM”? Many would say that acronym stands for “Read the Fine* Manual” (*This word is open for variation not suitable for publication.) In my opinion, it stands for “Read the Fine Manpage.” Manpages are there for a reason — to help you understand how to use a command. Manpages are generally written with the same format, so once you gain an understanding of the format, you will be able to read (and understand) them all. And don’t underestimate the value of the manpage. Even if you can’t completely grasp the information given, you can always scroll down to find out what each command argument does. And the best part of using manpages is that when someone says “RTFM” you can say I have “RTFMd.”
#10: mount/umount
Without these two commands, using removable media or adding external drives wouldn’t happen. The mount/umount command is used to mount a drive (often labeled like /dev/sda) to a directory in the Linux file structure. Both mount and umount take advantage of the /etc/fstab file, which makes using mount/umount much easier. For instance, if there is an entry in the /etc/fstab file for /dev/sda1 that maps it to /data, that drive can be mounted with the command mount /data. Typically mount/umount must have root privileges (unless fstab has an entry allowing standard users to mount and unmount the device). You can also issue the mount command without arguments and you will see all drives that are currently mounted and where they’re mapped to (as well as the type of file system and the permissions).
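For reference, an /etc/fstab entry that permits ordinary users to mount a device might look like this (device, mount point, and file system are illustrative; the "user" option is what allows non-root mounting):

```
# /etc/fstab: the "user" option lets non-root users run mount and umount
/dev/sda1  /data  ext3  noauto,user  0  0
```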
Can’t live without ‘em
These 10 Linux commands make Linux administration possible. There are other helpful commands, as well as commands that are used a lot more often than these. But the commands outlined here fall into the necessity category. I don’t know about you, but I don’t go a day without using at least half of them.

IPv6: What is Internet Protocol?

Internet Protocol (IP) is one of many communications protocols that compose the Internet Protocol Suite (IPS) and is arguably the most important protocol. Experts usually describe IPS as a stack of protocols that convert application information (like e-mail or Web traffic) into digital packets capable of traversing networks, including the Internet.
Specifically, IP is responsible for transmitting digital packets from a source host to a destination host over a network connection. Request for Comments (RFC) 791 is the last word on IP and provides the following definition:
“The internet protocol is specifically limited in scope to provide the functions necessary to deliver a package of bits (an internet datagram) from a source to a destination over an interconnected system of networks. There are no mechanisms to augment end-to-end data reliability, flow control, sequencing, or other services commonly found in host-to-host protocols. The internet protocol can capitalize on the services of its supporting networks to provide various types and qualities of service.”
Packets and datagrams: Is there a difference?
When discussing IP, many people (including me) interchange the terms packet and datagram as both terms have similar (identical, some argue) definitions. RFC 1594 defines a datagram/packet as:
“A self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination computer without reliance on earlier exchanges between the source and destination computer and the transporting network.”
Since they’re the same, why worry about definitions? Well, sometimes experts define packets differently from datagrams, and that’s when it gets confusing. They use the term packet when discussing reliable data transmission protocols such as TCP, and the term datagram when talking about best-effort data transmission protocols like UDP. For our discussion of IP, it doesn’t matter which term is used, but I’d like to stick with datagram (you’ll see why in a moment).
IP attributes
IP has several attributes that define how data is transmitted, and they’re important regardless of whether we’re discussing IPv4 or IPv6. So, let’s take a look at them:
Host addressing: IP defines the addressing scheme for each host on the network and uses the addresses to facilitate datagram delivery.
Protocol independence: IP by design is able to work with any type of underlying network protocol using protocol stack technology.
Connectionless delivery: IP does not set up a relationship between the sending host and the receiving host. The sending host just creates datagrams and sends them on their way.
Best-effort delivery: IP tries its best to ensure that the receiving host actually gets the datagrams addressed to it, but there are no guarantees.
No provision for delivery acknowledgments: The receiving host does not acknowledge the fact that it indeed did receive the data addressed to it.
One wonders how IP datagrams get where they’re supposed to go, when the last three attributes create a less than perfect environment. Why leave those features out of the protocol? The simple reason is better performance. Established connections, error checking, and guaranteed delivery all require additional processing power and network bandwidth. So if the data being transmitted doesn’t require those features, it’s better that they aren’t used. Besides, the people who developed IP were a smart bunch, designing a more efficient approach that uses protocol stacking.
Protocol or TCP/IP stack
If you recall, I mentioned something called a protocol stack (officially, the TCP/IP stack) earlier. If the type of transmitted data (such as e-mail) requires guaranteed delivery, receipt acknowledgment, or an official connection handshake, that information is appended earlier in the datagram-building process, or what is called “further up the stack.” It turns out to be a good solution, especially since it conserves network resources.
On a side note, I debated whether to include information about the TCP/IP stack in this discussion, as we’re supposed to be focused on IP. The only problem is that it’s very hard to divorce TCP from IP, especially since a large percentage of datagrams include TCP information.
TCP/IP Guide has an excellent explanation of what a TCP/IP stack is and how it works. The process of encapsulation (ultimately why I included this information) also takes place in the TCP/IP stack. Encapsulation is where the next protocol in the stack encapsulates the datagram, giving it additional information that’s required, so the packet can successfully reach its destination.

10 mistakes new Linux administrators make

If you’re new to Linux, a few common mistakes are likely to get you into trouble. Learn about them up front so you can avoid major problems as you become increasingly Linux-savvy.
For many, migrating to Linux is a rite of passage and a pleasure. For others, it’s a nightmare waiting to happen. It’s wonderful when it’s the former; it’s a real show stopper when it’s the latter. But that nightmare doesn’t have to happen, especially when you know, first hand, the most common mistakes new Linux administrators make. This article will help you avoid those mistakes by laying out the most typical Linux missteps.

#1: Installing applications from various package types
This might not seem like such a bad idea at first. You are running Ubuntu, so you know the package management system uses .deb packages. But there are a number of applications that you find only in source form. No big deal, right? They install, they work. So why not? Simple: your package management system can’t keep track of what you have installed from source. So what happens when package A (which you installed from source) depends upon package B (which was installed from a .deb binary) and package B is upgraded by the update manager? Package A might still work, or it might not. But if both package A and B are installed from .debs, the chances of them both working are far higher. Also, updating packages is much easier when all packages are of the same binary type.
#2: Neglecting updates
Okay, this one doesn’t point out Linux as much as it does poor administration skills. But many admins get Linux up and running and think they have to do nothing more. It’s solid, it’s secure, it works. Well, new updates can patch new exploits. Keeping up with your updates can make the difference between a compromised system and a secure one. And just because you can rest on the security of Linux doesn’t mean you should. For security, for new features, for stability — the same reasons we have all grown accustomed to updating with Windows — you should always keep up with your Linux updates.
#3: Poor root password choice
Okay, repeat after me: “The root password is the key to the kingdom.” So why would you make the key to the kingdom simple to crack? Sure, make your standard user password something you can easily remember and/or type. But that root password — you know, the one that’s protecting your enterprise database server — give that a much higher difficulty level. Make that password one you might have to store, encrypted, on a USB key, requiring you to slide that USB key into the machine, mount it, decrypt the password, and use it.
#4: Avoiding the command line
No one wants to have to memorize a bunch of commands. And for the most part, the GUI takes care of a vast majority of them. But there are times when the command line is easier, faster, more secure, and more reliable. Avoiding the command line should be considered a cardinal sin of Linux administration. You should at least have a solid understanding of how the command line works and a small arsenal of commands you can use without having to RTFM. With a small selection of command-line tools on top of the GUI tools, you should be ready for just about anything.
#5: Not keeping a working kernel installed
Let’s face it, you don’t need 12 kernels installed on one machine. But you do need to update your kernel, and the update process doesn’t delete previous kernels. What do you do? You keep at least the most recent working kernel at all times. Let’s say you have 2.6.22 as your current working kernel and 2.6.20 as your backup. If you update to 2.6.26 and all is working well, you can remove 2.6.20. On an rpm-based system, you can list and remove old kernels like this: rpm -qa | grep -i kernel followed by rpm -e kernel-{VERSION}.
#6: Not backing up critical configuration files
How many times have you upgraded X11 only to find the new version fubar’d your xorg.conf file to the point where you can no longer use X? It used to happen to me a lot when I was new to Linux. But now, anytime X is going to be updated I always back up /etc/X11/xorg.conf in case the upgrade goes bad. Sure, an X update tries to back up xorg.conf, but it does so within the /etc/X11 directory. And even though this often works seamlessly, you are better off keeping that backup under your own control. I always back up xorg.conf to the /root directory so I know only the root user can even access it. Better safe than sorry. This applies to other critical backups, such as Samba, Apache, and MySQL, too.
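A habit worth scripting: copy the file aside with a date stamp before touching it. Here's the idea, with scratch paths standing in for the real /etc/X11/xorg.conf:

```shell
# Keep a dated copy of a critical config file before an upgrade
# (scratch paths used here instead of the real /etc file)
rm -rf /tmp/cfgdemo
mkdir -p /tmp/cfgdemo
echo 'Section "Device"' > /tmp/cfgdemo/xorg.conf

# -p preserves ownership and permissions on the copy
cp -p /tmp/cfgdemo/xorg.conf "/tmp/cfgdemo/xorg.conf.$(date +%Y%m%d)"
ls /tmp/cfgdemo
```

For the real thing, point the destination somewhere only root can read, such as /root.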
#7: Booting a server to X
When a machine is a dedicated server, you might want to have X installed so some administration tasks are easier. But this doesn’t mean you should have that server boot to X. This will waste precious memory and CPU cycles. Instead, stop the boot process at runlevel 3 so you are left at the command line. Not only will this leave all of your resources to the server, it will also keep prying eyes out of your machine (unless they know the command line and passwords to log in). To use X, simply log in and run the command startx to bring up your desktop.
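On sysvinit-based distributions, the default runlevel typically lives in /etc/inittab; a line like the following (shown as a sketch, not a drop-in file) keeps the machine at runlevel 3:

```
# /etc/inittab (sysvinit): boot to runlevel 3, the text-mode
# multi-user runlevel, instead of the graphical runlevel 5
id:3:initdefault:
```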
#8: Not understanding permissions
Permissions can make your life really easy, but if done poorly, can make life really easy for hackers. The simplest way to handle permissions is the rwx method. Here’s what the letters mean: r=read, w=write, x=execute. Say you want a user to be able to read a file but not write to it. To do this, you would issue chmod u+r,u-wx filename. What often happens is that a new user sees an error saying they do not have permission to use a file, so they hit the file with something akin to chmod 777 filename to avoid the problem. But this can actually cause more problems because it gives the file executable privileges for everyone. Remember this: 777 gives a file rwx permissions for all users (user, group, and other), 666 gives the file rw privileges for all users, 555 gives rx, 444 gives r, 333 gives wx, 222 gives w, 111 gives x, and 000 gives no privileges at all. It is always best to keep Other highly restricted in its permissions.
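A quick way to convince yourself of the octal math (r=4, w=2, x=1, summed per class) on a scratch file:

```shell
# 640 = user rw- (4+2), group r-- (4), other --- (0)
touch /tmp/perm-demo
chmod 640 /tmp/perm-demo

# GNU stat prints the octal and symbolic forms side by side
stat -c '%a %A' /tmp/perm-demo   # -> 640 -rw-r-----
```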
#9: Logging in as root user
I can’t stress this enough. Do NOT log in as root. If you need root privileges to execute or configure an application, su to root from a standard user account. Why is logging in as root bad? Well, when you log in as a standard user, all running X applications have only that user’s access to the system. If you log in as root, X runs with full root permissions. This causes two problems: 1) if you make a big mistake via a GUI, that mistake can be catastrophic to the system, and 2) running X as root makes your system more vulnerable.
#10: Ignoring log files
There is a reason /var/log exists. It is a single location for all log files. This makes it simple to remember where you first need to look when there is a problem. Possible security issue? Check /var/log/secure. One of the very first places I look is /var/log/messages. This log file is the common log file where all generic errors and such are logged to. In this file you will get messages about networking, media changes, etc. When administering a machine you can always use a third-party application such as logwatch that can create various reports for you based on your /var/log files.
Sidestep the problems
These 10 mistakes are pretty common among new Linux administrators. Avoiding the pitfalls will take you through the Linux migration rite of passage faster, and you will come out on the other side a much better administrator.

Computer beep using API

Imports System.Runtime.InteropServices


'put this code just below the class declaration

<DllImport("kernel32.dll", EntryPoint:="Beep", _
CharSet:=CharSet.Unicode, ExactSpelling:=True, _
CallingConvention:=CallingConvention.StdCall)> _
Public Shared Function _
aBeep(ByVal dwFreq As Integer, ByVal dwDuration As Integer) _
As Boolean
' Leave the body of the function empty.
End Function


'now make a call to the Function with Frequency
'and Duration parameters. Can be used anywhere you want
'to alert the user.

aBeep(1000, 500)
aBeep(2000, 1000)

Multi-threading with background worker

Enables you to take advantage of multiple processors. Create a new class file
and write your code in the BackgroundWorker1_DoWork procedure.
declare it like:
Dim x As New tee
start it like:
x.startBackgroundTask()


Imports System.Threading
Public Class tee
Private Sub BackgroundWorker1_DoWork(ByVal sender As Object, _
ByVal e As System.ComponentModel.DoWorkEventArgs) Handles BackgroundWorker1.DoWork
' Add your code here

End Sub
Private EndedAt As String, StartedAt As String
Private tegutseb As Boolean = False, Notifieonend As Boolean
Private WithEvents BackgroundWorker1 As New System.ComponentModel.BackgroundWorker
Public Sub startBackgroundTask() ' This will start the backgroundworker
tegutseb = True
StartedAt = "Started : " & Format(Now, "h:mm:ss") & "." & Now.Millisecond
BackgroundWorker1.RunWorkerAsync()
End Sub
Private Sub BackgroundWorker1_RunWorkerCompleted(ByVal sender As Object, _
ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) _
Handles BackgroundWorker1.RunWorkerCompleted ' Completed
tegutseb = False
EndedAt = "Ended : " & Format(Now, "h:mm:ss") & "." & Now.Millisecond
If Notifieonend = True Then MsgBox(Timestamp, , "Process Done")
End Sub
Public ReadOnly Property Timestamp() As String
Get
Return StartedAt & Chr(13) & EndedAt
End Get
End Property
Public ReadOnly Property IsWorking() As Boolean
Get
Return tegutseb
End Get
End Property
Public Property Notifie_on_end() As Boolean
Get
Return Notifieonend
End Get
Set(ByVal value As Boolean)
Notifieonend = value
End Set
End Property
End Class

Get connection string from app.config

Store your connection string in the app.config like this (server, database, and application names are placeholders):

<connectionStrings>
<add name="YourName" connectionString="Persist Security Info=False;Data Source=Server_Name;Initial Catalog=Database_Name;Integrated Security=SSPI;Application Name=Application_Name" providerName="System.Data.SqlClient" />
</connectionStrings>

To use:
Dim SqlConnection As New SqlConnection(GetConnectionString("YourName"))
Snippet

Public Shared Function GetConnectionString(ByVal strConnection As String) As String
'Declare a string to hold the connection string
Dim sReturn As New String("")
'Check to see if they provided a connection string name
If Not String.IsNullOrEmpty(strConnection) Then
'Retrieve the connection string from the app.config
sReturn = ConfigurationManager.ConnectionStrings(strConnection).ConnectionString
Else
'Since they didn't provide the name of the connection string,
'just grab the default one from app.config
sReturn = ConfigurationManager.ConnectionStrings("YourConnectionString").ConnectionString
End If
'Return the connection string to the calling method
Return sReturn
End Function

Use KeyChar to limit the characters that can be entered into a Text Box

This method allows the user to enter only certain characters into a text box, ignoring any other keys. In this example only numbers, the Backspace key and the period are allowed; everything else is ignored.

'allow only numbers, the Backspace key and the period

If (e.KeyChar < "0"c OrElse e.KeyChar > "9"c) _
AndAlso e.KeyChar <> ControlChars.Back AndAlso e.KeyChar <> "."c Then
'cancel keys
e.Handled = True
End If
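For context, the snippet above belongs inside the TextBox's KeyPress event handler; a complete sketch (assuming a TextBox named TextBox1) looks like this:

```vb
Private Sub TextBox1_KeyPress(ByVal sender As Object, _
        ByVal e As KeyPressEventArgs) Handles TextBox1.KeyPress
    'allow only digits, the Backspace key and the period
    If (e.KeyChar < "0"c OrElse e.KeyChar > "9"c) _
            AndAlso e.KeyChar <> ControlChars.Back AndAlso e.KeyChar <> "."c Then
        e.Handled = True 'ignore the key
    End If
End Sub
```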

Open a Folder Browse Dialog window Using Vb.net

' First create a FolderBrowserDialog object
Dim FolderBrowserDialog1 As New FolderBrowserDialog

' Then use the following code to create the Dialog window
' Change the .SelectedPath property to the default location
With FolderBrowserDialog1
' Desktop is the root folder in the dialog.
.RootFolder = Environment.SpecialFolder.Desktop
' Select the C:\Windows directory on entry.
.SelectedPath = "c:\windows"
' Prompt the user with a custom message.
.Description = "Select the source directory"
If .ShowDialog = DialogResult.OK Then
' Display the selected folder if the user clicked on the OK button.
MessageBox.Show(.SelectedPath)
End If
End With

Resizes Image to desired size Using vb.net

'following code resizes picture to fit

Dim bm As New Bitmap(PictureBox1.Image)
Dim x As Int32 = 100 'new width size (set to the size you need)
Dim y As Int32 = 75 'new height size

Dim width As Integer = x 'image width

Dim height As Integer = y 'image height

Dim thumb As New Bitmap(width, height)

Dim g As Graphics = Graphics.FromImage(thumb)

g.InterpolationMode = Drawing2D.InterpolationMode.HighQualityBicubic

g.DrawImage(bm, New Rectangle(0, 0, width, height), New Rectangle(0, 0, bm.Width, _
bm.Height), GraphicsUnit.Pixel)

g.Dispose()


'image path. better to make this dynamic. I am hardcoding a path just for example sake
thumb.Save("C:\image.jpg", _
System.Drawing.Imaging.ImageFormat.Jpeg) 'can use any image format

bm.Dispose()

thumb.Dispose()

Me.Close() 'exit app

Running an external executable file Using Vb.net

'make a call to your application or file by giving Process.Start
'the full path to your file including name and extension.

'will open an Excel workbook called myfile.xls with MS Excel
Process.Start("c:\myTestFolder\myfile.xls")

'will run an executable file called myfile.exe
Process.Start("c:\myTestFolder\myfile.exe")

'will open a blank notepad
Process.Start("Notepad.exe")

Monday, December 8, 2008

Useful Excel Visual Basic Macros for Programmers

Excel VB macros are useful because they can do things in your workbooks for you, like manipulating cells and worksheets. Excel Visual Basic (VB) gives us a number of methods to interact with worksheets and cells, and I will cover some of the more intuitive methods here.
One of my favorite ways to interact with cells in a worksheet is rather direct. I like it because it is easy to double-check and conceptualize.
Let's look at the basic statement that is the second line in the following very short Excel macro.
Sub put_value_in_Cell()
Worksheets("Sheet2").Cells(3, 7).Value = 1
End Sub

This statement assigns the cell located at (3,7) in Sheet2 the value 1. That literally means that if you go to Sheet2 in your active Excel workbook, you'll see a 1 in the cell at row 3, column 7 (cell G3).
What is this (3,7)? We are using index numbers for the row and column instead of the column letters you might be accustomed to. Note: indexes for cells begin at 1, not zero; there is no row zero. I'll re-type the last macro, but this time using variables to make it clearer.


Sub put_value_in_Cell ()
my_row = 3
my_column = 7
my_workSheet = "Sheet2"
Worksheets(my_workSheet).Cells(my_row, my_column).Value = 1
End Sub

Using these index numbers inside loops can really be useful to get stuff done. For example, to go down the first column of our Excel worksheet and put a zero into the first 10 cells, you could do something like this:

Sub an_example()
For row_counter = 1 to 10
Worksheets("Sheet1").Cells(row_counter, 1).Value = 0
Next row_counter
End Sub

or generally...

Sub an_example()
For row_counter = 1 to 10
'Whatever you want your macro to do...Check or assign values, etc.
Next row_counter
End Sub

To compare the values of two cells, you can do this:

Sub an_example2()
For row_counter = 1 to 10
If Worksheets("Sheet1").Cells(row_counter, 1).Value = _
Worksheets("Sheet1").Cells(row_counter, 2).Value Then
'Whatever you want done if the cells are equal in value.
'Note the _ in the If statement is there because it allows
'a statement to span multiple lines so we can see it all at once
'without running off the page.
End If
Next row_counter
End Sub

It does not stop with the values of cells, either. You may have noticed that we used .Value after specifying a cell. However, Excel VB provides other properties and methods of cells that are really useful. For example, the following statement, which could be in a macro, checks whether the cell contains a formula:

If Worksheets("Sheet1").Cells(theRow, theCol).HasFormula Then

There really are quite a number of things that may be accomplished with these techniques.

Write on excel sheets Using Visual basic 6.0

If you want to export data to Excel you can do two things:
1. Load the Excel application.
2. Use a Recordset to import the data (you must have an Excel file to do this).
First of all you must add a 'Microsoft Excel [version] Object Library' reference to your program. You can do this by choosing the menu Project >> References.

1. Load the Excel application (you must have Microsoft Excel installed).

a. Declare variables as Excel.Application and Excel.Workbook:

Dim objExcel As Excel.Application
Dim objWorkbook As Excel.Workbook

b. Open the Excel application:

On Error Resume Next
Set objExcel = GetObject(, "Excel.Application") 'if Excel is already open you can use GetObject
If Err.Number Then
    Err.Clear
    Set objExcel = CreateObject("Excel.Application") 'or CreateObject to open a new Excel application
End If

c. Create a new workbook and fill a cell:

Set objWorkbook = objExcel.Workbooks.Add
objWorkbook.ActiveSheet.Cells(1, 1) = value 'Cells(1, 1) means cell A1; value is the value you want to fill

Note: you can also use a range to merge cells and fill the value:

objWorkbook.ActiveSheet.Range("A4:A6").Merge
objWorkbook.ActiveSheet.Range("A4:A6").Value = value

2. Using a Recordset (Microsoft Excel need not be installed, but you must have an Excel file with the first row filled with values as column headers; otherwise the 'Insert Into' statement won't work).

a. Create and open a connection (I'm using ADO):

Dim Conn As ADODB.Connection
Set Conn = New ADODB.Connection
Conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=FileName.xls;Extended Properties=Excel 4.0;"

b. Fill the value into the Excel sheet:

Conn.Execute "Insert Into [Sheet1$] (A1) VALUES ('value')" '[Sheet1$] is the active sheet

Sunday, December 7, 2008

Use ONLINE and SORT_IN_TEMPDB Effectively

The ONLINE and SORT_IN_TEMPDB index options affect both the temporary space requirements and performance of the index create or rebuild operation. The advantages and disadvantages of each are covered in this section.
When considering the ONLINE option, you must weigh the need for a performant index operation versus the need for concurrent user access to the underlying data.
· To achieve the best performance, that is, the least time to create or rebuild an index, set ONLINE to OFF. However, this prevents all user access to the underlying table for the duration of the index create or rebuild operation.
· To achieve the best concurrency, that is, the least impact on other users accessing the table, set ONLINE to ON. However, the index operation will take more time.
You must also take into consideration the extra temporary space requirements of the online operation.
· To use the least amount of temporary space while rebuilding a clustered index, set ONLINE to OFF.
· To use the least amount of temporary space while rebuilding a nonclustered index, set ONLINE to ON.
· If there are concurrent user transactions on the table during the online index operation, you must plan for additional space in tempdb for the version store.
For more information, see Determining the Amount of Temporary Space Used in this paper.
As we discussed earlier, when SORT_IN_TEMPDB is set to ON, sort runs and other intermediate tasks are stored in tempdb rather than the user database. Setting this option to ON can have two advantages:
· You can achieve the most contiguous space in the index. When the sort extents are held separately in tempdb, the sequence in which they are freed has no effect on the location of the index extents. Also, when the intermediate sort runs are stored in tempdb instead of the destination filegroup, there is more space available in the destination filegroup. This increases the chance that index extents will be contiguous.

When both SORT_IN_TEMPDB and ONLINE are set to ON, the index transactions are stored in the tempdb transaction log, and the concurrent user transactions are stored in the transaction log of the user database. This allows you to truncate the transaction log of the user database during the index operation if needed. Additionally, if the tempdb log is not on the same disk as the user database log, the two logs are not competing for the same disk space.

Measuring Temporary Disk Space Usage

When the temporary space is used from the tempdb database, you can measure the amount of temporary space used by an index operation by using the dynamic management views provided in SQL Server 2005. There are three views that report the temporary disk space used by any operation in tempdb:
· sys.dm_db_task_space_usage
· sys.dm_db_session_space_usage
· sys.dm_db_file_space_usage
While these views only pertain to the tempdb database, you can set the SORT_IN_TEMPDB option to ON when testing for disk space usage requirements and then plan for the same space allocation in your user database.
The sys.dm_db_task_space_usage dynamic management view provides tempdb usage information for each task. As a task (such as an index rebuild) progresses, you can monitor how much temporary space the task is using. However, as soon as the task completes, the counters in the view are reset to zero. So, unless you happen to query this view just at the moment before the task completes, you can’t get the total amount of tempdb space used by a given task. However, when the task is completed these values are aggregated at the session level and stored in the sys.dm_db_session_space_usage view.
The sys.dm_db_session_space_usage provides tempdb usage information for each session. The easiest way to measure the tempdb space used by a given operation is to query sys.dm_db_session_space_usage for your session before and after the operation. However, there is a catch. The data in sys.dm_db_session_space_usage is not updated until the completion of the batch; therefore, you must execute these statements as three separate batches. Essentially, all you really need is three GO statements, as shown in the following example:
SELECT * FROM sys.dm_db_session_space_usage WHERE session_id = @@spid;
GO
-- the operation you want to measure (your index create or rebuild) goes here
GO
SELECT * FROM sys.dm_db_session_space_usage WHERE session_id = @@spid;

GO
When you query the sys.dm_db_session_space_usage view, pay attention to the following two columns in the result set:
· internal_objects_alloc_page_count: This column represents the space used by the sort runs while creating or rebuilding an index.
· user_objects_alloc_page_count: This column represents the tempdb space used by the temporary mapping index. The temporary mapping index is created only when an online index operation creates, drops, or rebuilds a clustered index.
To measure the size of the version store, you can query the version_store_reserved_page_count column in the sys.dm_db_file_space_usage view. The version store size can also be monitored by using the System Monitor (perfmon) counter Version Store Size (KB) in the Transactions performance object. The amount of space required for the version store depends on the size and duration of the transactions that change the data in the underlying table.
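As a quick sketch, the version store column mentioned above can be totaled directly; the counts are in 8-KB pages, so multiplying by 8 gives KB:

```sql
-- Approximate version store size in KB (8 KB per page)
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM sys.dm_db_file_space_usage;
```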

Saturday, December 6, 2008

10 tips for sorting, grouping, and summarizing SQL data

Arranging SQL data so that you can analyse it effectively requires an understanding of how to use certain SQL clauses and operators. These tips will help you figure out how to build statements that will give you the results you want.
Arranging data in a manner that's meaningful can be a challenge. Sometimes all you need is a simple sort. Often, you need more -- you need groups you can analyse and summarise. Fortunately, SQL offers a number of clauses and operators for sorting, grouping, and summarising. The following tips will help you discern when to sort, when to group, and when and how to summarize. For detailed information on each clause and operator, see Books Online.
#1: Bring order with a sort
More often than not, all your data really needs is a little order. SQL's ORDER BY clause organises data in alphabetic or numeric order. Consequently, similar values sort together in what appear to be groups. However, the apparent groups are a result of the sort; they aren't true groups. ORDER BY displays each record whereas a group may represent multiple records.
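For instance, a sketch against the Customers table used in the later tips (the column names here are assumed for illustration) sorts by state and then by ZIP code within each state:

```sql
-- Sort customers by state, then by ZIP within each state
SELECT CustomerName, State, ZIP
FROM Customers
ORDER BY State, ZIP;
```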


#2: Reduce similar values into a group
The biggest difference between sorting and grouping is this: Sorted data displays all the records (within the confines of any limiting criteria) and grouped data doesn't. The GROUP BY clause reduces similar values into one record. For instance, a GROUP BY clause can return a unique list of ZIP codes from a source that repeats those values: SELECT ZIP FROM Customers GROUP BY ZIP
Include only those columns that define the group in both the GROUP BY and SELECT column lists. In other words, the SELECT list must match the GROUP BY list, with one exception: The SELECT list can include aggregate functions. (GROUP BY doesn't allow aggregate functions.)
Keep in mind that GROUP BY won't sort the resulting groups. To arrange groups alphabetically or numerically, add an ORDER BY clause (# 1). In addition, you can't refer to an aliased field in the GROUP BY clause. Group columns must be in the underlying data, but they don't have to appear in the results.


#3: Limit data before it's grouped
You can limit the data that GROUP BY groups by adding a WHERE clause. For instance, the following statement returns a unique list of ZIP codes for just Kentucky customers: SELECT ZIP FROM Customers WHERE State = 'KY' GROUP BY ZIP
It's important to remember that WHERE filters data before the GROUP BY clause evaluates it.
Like GROUP BY, WHERE doesn't support aggregate functions.


#4: Return all groups
When you use WHERE to filter data, the resulting groups display only those records you specify. Data that fits the group's definition but does not meet the clause's conditions won't make it to a group. Include ALL when you want to include all data, regardless of the WHERE condition. For instance, adding ALL to the previous statement returns all of the ZIP groups, not just those in Kentucky: SELECT ZIP FROM Customers WHERE State = 'KY' GROUP BY ALL ZIP
As is, the two clauses are in conflict, and you probably wouldn't use ALL in this way. ALL comes in handy when you use an aggregate to evaluate a column. For example, the following statement counts the number of customers in each Kentucky ZIP, while also displaying other ZIP values: SELECT ZIP, Count(ZIP) AS KYCustomersByZIP FROM Customers WHERE State = 'KY' GROUP BY ALL ZIP
The resulting groups comprise all ZIP values in the underlying data. However, the aggregate column (KYCustomersByZIP) would display 0 for any group other than a Kentucky ZIP.
Remote queries don't support GROUP BY ALL.


#5: Limit data after it's grouped
The WHERE clause (# 3) evaluates data before the GROUP BY clause does. When you want to limit data after it's grouped, use HAVING. Often, the result will be the same whether you use WHERE or HAVING, but it's important to remember that the clauses are not interchangeable. Here's a good guideline to follow when you're in doubt: Use WHERE to filter records; use HAVING to filter groups.
Usually, you'll use HAVING to evaluate a group using an aggregate. For instance, the following statement returns a unique list of ZIP codes, but the list might not include every ZIP code in the underlying data source: SELECT ZIP, Count(ZIP) AS CustomersByZIP FROM Customers GROUP BY ZIP HAVING Count(ZIP) = 1
Only those groups with just one customer make it to the results.


#6: Get a closer look at WHERE and HAVING
If you're still confused about when to use WHERE and when to use HAVING, apply the following guidelines:
WHERE comes before GROUP BY; SQL evaluates the WHERE clause before it groups records.
HAVING comes after GROUP BY; SQL evaluates HAVING after it groups records.
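To make the distinction concrete, here is a sketch (same hypothetical Customers table as the other tips) where WHERE filters rows before grouping and HAVING filters the finished groups:

```sql
-- WHERE: rows are filtered before grouping
SELECT ZIP, COUNT(*) AS CustomerCount
FROM Customers
WHERE State = 'KY'
GROUP BY ZIP;

-- HAVING: groups are filtered after aggregation
SELECT ZIP, COUNT(*) AS CustomerCount
FROM Customers
GROUP BY ZIP
HAVING COUNT(*) > 1;
```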


#7: Summarize grouped values with aggregates
Grouping data can help you analyse your data, but sometimes you'll need a bit more information than just the groups themselves. You can add an aggregate function to summarise grouped data. For instance, the following statement displays a subtotal for each order: SELECT OrderID, Sum(Cost * Quantity) AS OrderTotal FROM Orders GROUP BY OrderID
As with any other group, the SELECT and GROUP BY lists must match. Including an aggregate in the SELECT clause is the only exception to this rule.


#8: Summarise the aggregate
You can further summarise data by displaying a subtotal for each group. SQL's ROLLUP operator displays an extra record, a subtotal, for each group. That record is the result of evaluating all the records within each group using an aggregate function. The following statement totals the OrderTotal column for each group: SELECT Customer, OrderNumber, Sum(Cost * Quantity) AS OrderTotal FROM Orders GROUP BY Customer, OrderNumber WITH ROLLUP
The ROLLUP row for a group with two OrderTotal values of 20 and 25 would display an OrderTotal of 45. The first record in a ROLLUP result is unique because it evaluates all of the group records. That value is a grand total for the entire recordset.
ROLLUP doesn't support DISTINCT in aggregate functions or the GROUP BY ALL clause.


#9: Summarise each column
The CUBE operator goes a step further than ROLLUP by returning totals for each value in each group. The results are similar to ROLLUP, but CUBE includes an additional record for each column in the group. The following statement displays a subtotal for each group and an additional total for each customer: SELECT Customer, OrderNumber, Sum(Cost * Quantity) AS OrderTotal FROM Orders GROUP BY Customer, OrderNumber WITH CUBE
CUBE gives the most comprehensive summarisation. It not only does the work of both the aggregate and ROLLUP, but also evaluates the other columns that define the group. In other words, CUBE summarises every possible column combination.
CUBE doesn't support GROUP BY ALL.


#10: Bring order to summaries
When the results of a CUBE are confusing (and they usually are), add the GROUPING function as follows: SELECT GROUPING(Customer), OrderNumber, Sum(Cost * Quantity) AS OrderTotal FROM Orders GROUP BY Customer, OrderNumber WITH CUBE
The results include two additional values for each row:
The value 1 indicates that the value to the left is a summary value--the result of the ROLLUP or CUBE operator.
The value 0 indicates that the value to the left is a detail record produced by the original GROUP BY clause.

ORACLE8 AND THE WINDOWS NT OPERATING SYSTEM

The Oracle8 RDBMS for NT is written using Microsoft’s 32-bit API. By using the Microsoft 32-bit API, the Oracle8 RDBMS has been tightly integrated with the underlying hardware. Oracle8’s architecture for Microsoft Windows NT has been implemented as a single multithreaded process to conform with the Windows NT memory model.

Under the Windows NT operating system a process represents a logical unit of work or job that the operating system is to perform. A thread is one of many subtasks that are required to accomplish the job. The components of a thread include:
A unique identifier called a client ID.
The content of a set of registers that represent the state of the processor.
A stack for when the thread is running in user mode and a stack for when the thread is running in kernel mode.
The thread resides within the process’s virtual address space. When more than one thread exists in the same process, the threads share the address space and all the process’s resources. The NT kernel schedules a process’s thread(s) for execution. All processes running under Windows NT must have at least one thread before the process can be executed.

Unlike Oracle8 for UNIX, Oracle8 for NT uses a single process with multiple threads, thereby sharing memory in a single address space. The database uses the operating system facility for preemptive scheduling and load balancing across multiple CPUs.

The Oracle instance on Windows NT consists of a memory segment and a number of background threads. By default, the Oracle8 server and its associated background threads run in the Normal priority class. In this class, the scheduler can dynamically vary the priority between 1 and 15, but it cannot raise the dynamic priority to the real-time priority class. The real-time priority class ranges from 16 to 31 and cannot vary in priority based on behavior.

The Windows NT GUI and its associated utilities can be used to observe various portions of the Oracle8 RDBMS. The Windows menu is used to access the Windows NT Control Panel. By accessing the Control Panel the administrator can perform various tasks, such as observing, starting, or stopping any of the services running on the machine, including the services associated with the Oracle RDBMS.

The remainder of this chapter investigates the various components of the Oracle8 RDBMS architecture. Where possible, the GUI utilities provided by the Windows NT operating system are used to observe the various components of the RDBMS. The same utilities will also help develop our understanding of how the Oracle8 RDBMS is integrated with the Windows NT operating system.

ORACLE8 RDBMS ARCHITECTURE

The architecture of the Oracle RDBMS is divided into two distinct parts. One part is called the Oracle database; the other part is called the Oracle instance. The Oracle database is defined as:
A logical collection of data to be treated as a unit (tables).
Operating system files called data files, redo log files, initialization files and control files.
The Oracle instance is defined as:
The software mechanism used for accessing and controlling the database.
Having at least four background threads called PMON, SMON, DBWR and LGWR.
Including memory structures called the SGA and the PGA.
Each Oracle instance is identified by a System Identifier (SID).
Instances and databases are independent of each other, but neither is of any use without the other. For the end user to access the database, the Oracle instance must be started (the four background threads must be running) and the database must be mounted (by the instance) and opened. In the simple model a database can be mounted by only one instance. The exception to this is the Oracle Parallel Server, where a database can be mounted by more than one Oracle instance.

ORACLE DATABASE STRUCTURE

Our discussion of the Oracle RDBMS architecture will first focus on the part that makes up the Oracle database. The Oracle database has both a physical and a logical structure. The physical structure consists of the operating system files that make up the database. The logical structure is determined by the number of tablespaces and the database’s schema objects.

Tablespaces

All Oracle databases must consist of at least one logical entity called a tablespace. The characteristics of a tablespace are:
One or more per database. The database must have at least one tablespace, called SYSTEM. The SYSTEM tablespace holds the Oracle Data Dictionary. The Data Dictionary holds the various system tables and views, such as the Oracle performance tables, information about the users of the database, and how much space is left in the various tablespaces that make up the database. Most Oracle databases also include additional tablespaces, which are used to hold user data and the indexes that speed up data access. Additional tablespaces should also be created to hold data that is being sorted and data that is required for read consistency.
The physical representation of the tablespace is called a data file (a tablespace may consist of more than one data file).
Can be taken offline (due to media failure or for maintenance purposes) while leaving the database running. The exception to this rule is that the SYSTEM tablespace cannot be taken offline if the database is to remain running.
Unit of space for object storage. Objects are tables, indexes, synonyms, and clusters.
Contains default storage parameters for database objects.
When an end-user’s Oracle user ID is created the user is given access to a default tablespace and a temporary tablespace (where the sorting of data is performed).
Can be dropped (removed from the database).
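As an illustrative sketch (the tablespace name, file path, and sizes here are hypothetical), a tablespace and its data file are created in one statement:

```sql
-- Create a tablespace backed by one data file, with default storage parameters
CREATE TABLESPACE user_data
  DATAFILE 'C:\ORANT\DATABASE\usr1orcl.ora' SIZE 50M
  DEFAULT STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0);
```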
As stated previously tablespaces are logical entities. Tablespaces are physically represented by files that are called data files. Data files have the following attributes:
Are operating system files.
There is one or more per tablespace.
The finest granularity of the data file is called the data block.
A collection of data blocks is called an extent.
A segment (by definition) consists of one or more extents (therefore to make a segment larger, extents are added to the segment).
A data file consists of segments.
Contain transaction System Change Numbers (SCNs).
Data File Contents and Types of Segments

A data file can consist of several types of segments, and a segment can consist of one or more extents. The four different types of segments are rollback segments, temporary segments, index segments, and data segments.

Rollback segments have the following attributes:
Records old data.
Provides for rollback of uncommitted transactions.
Provides information for read consistency.
Used during database recovery from media or processor failure.
Wrap-around/reusable.
Can be dynamically created or dropped.
Rollback segments contain the following information:
Transaction ID.
File ID.
Block number
Row number
Column number
Row/column data.
Temporary segments have the following attributes:
Used by the Oracle RDBMS as a work area for sorting data.
The DBA defines which tablespace will contain temporary segments and therefore the tablespace where sorting will occur.
Index segments have the following attributes:
Allows for faster data retrieval by providing an index for the data in a table, thus eliminating a full table scan during the execution of a query (similar to how a reader would use the index in a book rather than scanning through the entire book to find a particular topic).
Data segments have the following attributes:
One per table/snapshot.
Contains all table data.
Data segments contain the following information:
Transaction ID.
File ID.
Block number
Row number
Column number
Row/column data.
Besides data files there are also files called redo log files. Redo log files record changes made to the database by various transactions. All changes made to the database will first be written to the redo log file. These files can also be written to an off-line log file (archived). Redo logs are used during database recovery to recover the database to the last physical backup or to the point in time of failure (for this type of recovery the database must be running in ARCHIVELOG mode). Redo log files have the following attributes:
Records new data.
Ensures permanence of data transactions.
Provides for roll forward recovery during database startup and after a media failure.
Redo log files contain:
Transaction IDs
Contents of redo log buffers.
Transaction SCN.
The Control File

Each database has one or more control files. The control file is used to store information about the database. The information in the control file includes:
Transaction System Change Number (SCN)
Location of all datafiles.
Names and locations of the redo log files.
Time stamp when database was created.
Database name.
Database size.
For database recovery purposes it is best to have multiple copies of the control file. Without the control file the Oracle RDBMS cannot find the pointers to the rest of the files that make up the database (data files and redo log files).

The INIT.ORA File

The init.ora file is the database initialization parameter file. It is only read at database start-up time. Every running Oracle instance has its own init<SID>.ora file (substitute <SID> with the Oracle System Identifier of the instance). This file contains various initialization and tuning parameters that are needed by the RDBMS. Some of the parameters in the init.ora file are:
The maximum number of processes that the Oracle instance will use (PROCESSES=).
The name of the database (DB_NAME=).
Various parameters for tuning memory management (DB_BLOCK_BUFFERS, SORT_AREA_SIZE...)
The location of the control file(s).
How these parameters affect the starting and running of the database is covered in the chapters on Oracle RDBMS installation and performance analysis and tuning.
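An illustrative init.ora fragment showing the kinds of parameters listed above (all names follow the parameters mentioned, but the values are examples only, not recommendations):

```
# db_name identifies the database
db_name = ORCL
# maximum number of processes the instance will use
processes = 59
# memory tuning parameters
db_block_buffers = 550
sort_area_size = 65536
# location of the control file(s)
control_files = (C:\orant\database\ctl1orcl.ora, C:\orant\database\ctl2orcl.ora)
```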