Friday, 16 October 2015

Git stash apply vs pop

Stashing takes the dirty state of your working directory - that is, the modified tracked files and staged changes - and saves it on a stack of unfinished changes that you can reapply at any time.

Git stash pop removes the stashed change on top of the stack and "applies" it to your repo. Git stash apply, on the other hand, just applies those changes to your repo and leaves them on the stack to be reused later, if you want.

Git stash pop is actually git stash apply followed by git stash drop.
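
For example, assuming you have some uncommitted changes you want to set aside, a typical round trip looks roughly like this :-
git stash
git stash list
git stash apply stash@{0}
git stash drop stash@{0}
The last two commands together are equivalent to a single git stash pop stash@{0}.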


Tuesday, 29 September 2015

Working with branches in Git

Git, with its concept of data as a stream of snapshots rather than differences over time, its local repository, etc., provides quite a few benefits over other VCSs. That being said, it is not exactly a piece of cake to get started with. Remember, with great power comes great responsibility.

In this post, I will list out a few very basic commands in order to get started with Git branching.

Create a branch

In order to create a branch (locally), run the following command :-
git checkout -b new_branch
Next up, we need to push this branch to the remote server, which is done as follows :-
git push origin new_branch

Delete a branch locally


The following command deletes a branch from the local repository.
git branch -d new_branch

Sometimes, you might need to force delete a branch if it has unmerged changes.
git branch -D new_branch

Delete a branch from remote


A remote branch can be deleted in more than one way.

git push origin --delete new_branch
or
git push origin :new_branch

The last command essentially pushes nothing to the remote branch, which is thus deleted.




Wednesday, 11 July 2012

Using PsExec to execute a process remotely

Every now and then, people (particularly admins) need to be able to execute processes/commands remotely against a machine. PsExec proves to be more than handy in such cases. One of the big advantages of PsExec is that, like many other utilities out there, it lets you execute processes remotely. However, unlike most of them, it does not require you to install/store software on the remote systems that you wish to access.

The simplest example of 'remoting' into another system and firing up the terminal would be achieved by simply executing the following script :-
PsExec.exe \\RemoteMachine cmd.exe
It lets you execute commands on the remote machine as if you were logged in. However, I intentionally left out a couple of details. The user you are executing this command as needs to be added as a user on the remote computer. Frequently, though, the need is to execute a process under some other account on the remote system. In such cases, you can provide the user details as follows :-
PsExec.exe \\RemoteMachine -u Domain\username -p password cmd.exe
When you specify a username, the remote process will execute in that account and will have access to that account's network resources. If you omit the username, the remote process will run in the same account from which you execute PsExec, but because the remote process is impersonating, it will not have access to network resources on the remote system. Also, PsExec does not require you to be an admin of the local system. Some commands are available only in the CMD shell, and hence need to be invoked via "cmd /c", as shown below.
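
For instance, running a shell built-in such as dir on the remote machine looks roughly like this (the path is just a placeholder) :-
PsExec.exe \\RemoteMachine cmd /c dir C:\Users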

Another important aspect of PsExec, which the astute readers would have noticed, is that when you first run PsExec, it asks you to accept its EULA. In case a script using PsExec is run by different users and scripts (in order to automate stuff), PsExec requires an extra argument, and that's "-accepteula". Not having this argument would mean that the script hangs, waiting for a user to accept the EULA.
PsExec.exe -accepteula \\RemoteMachine -u Domain\username -p password cmd.exe
In case a particular executable needs to be run and passed parameters, the command looks like :-
PsExec.exe -accepteula \\RemoteMachine -u Domain\username -p password executable arguments
As can probably be seen, it's a nifty little tool which can be immensely helpful, particularly if telnet or any other such utility is not installed on the remote system, either due to restrictions on installing software or otherwise.

Friday, 29 June 2012

Sql query from NHibernate criteria

Let me first state the requirement clearly. I need to get the Sql query from an NHibernate criteria - the query that will be fired when the criteria is executed. However, I need to get the sql query without executing the criteria. This was not required for logging the query or debugging any issues.


The specific requirement in my case was to create the criteria based on all the business rules, get the sql query from it (without actually executing the criteria), and use the query as an ADO command and avoid NHibernate, for performance reasons. This does seem a little weird to me though, I must admit. 


The advantage of this approach is that, if achieved, it would avoid the need for writing a sql parser of our own based on the business rules; the criteria would help us in that regard. Executing the query as an ADO command instead of through an NHibernate criteria results in performance gains in certain cases. My requirement was to use it while performing data export from tables.


I started off with a simple criteria and, using the criteria walker approach, was able to get the sql query from the criteria without actually executing it. My joy was short-lived though: when I moved on to more complex criteria, with multiple conditions, it started to fail. It was still spitting out the sql query, but without the various parameter values, which were substituted by a '?' sign. Another problem was that SetMaxResults(N) was not working. I discarded this approach and started looking at other possible solutions.
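
For reference, the criteria walker approach I tried was roughly along the lines of the sketch below. It pokes into NHibernate's internals (CriteriaImpl, SessionFactoryImpl, CriteriaLoader), so the exact types and constructor signatures may vary between NHibernate versions - treat it as an illustration of the idea, not production code.

using NHibernate;
using NHibernate.Engine;
using NHibernate.Impl;
using NHibernate.Loader.Criteria;
using NHibernate.Persister.Entity;

public static class CriteriaSqlHelper
{
    // Runs the criteria through NHibernate's own loader machinery and returns
    // the SQL it would generate, without executing the query. Note that the
    // parameters still show up as '?' placeholders in the returned string.
    public static string GetSql(ICriteria criteria)
    {
        var criteriaImpl = (CriteriaImpl)criteria;
        var session = criteriaImpl.Session;
        var factory = (SessionFactoryImpl)session.Factory;

        var entityName = factory.GetImplementors(criteriaImpl.EntityOrClassName)[0];
        var persister = (IOuterJoinLoadable)factory.GetEntityPersister(entityName);

        var loader = new CriteriaLoader(persister, factory, criteriaImpl,
                                        entityName, session.EnabledFilters);

        return loader.SqlString.ToString();
    }
}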


The second approach I thought of was to use NHibernate Interceptors. The idea was to get the sql statement from the OnPrepareStatement(SqlString sql) method, and then not go ahead with the transaction. This approach faced the same problem: the sql statement had the parameters missing. Again.
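
For completeness, the interceptor attempt was roughly the following sketch (the interceptor still has to be registered when opening the session, which is omitted here) :-

using NHibernate;
using NHibernate.SqlCommand;

public class SqlCaptureInterceptor : EmptyInterceptor
{
    public string CapturedSql { get; private set; }

    // Called by NHibernate just before the statement is prepared. The SqlString
    // here still contains '?' placeholders instead of the actual parameter
    // values, which is exactly the problem described above.
    public override SqlString OnPrepareStatement(SqlString sql)
    {
        CapturedSql = sql.ToString();
        return sql;
    }
}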


I also flirted with the idea of executing the criteria against a fake IDbConnection for a while. However, somehow the whole idea seemed a little weird, and I did not go ahead with this approach.


The entire episode was a futile attempt at getting the sql from the criteria. The problem appeared to be the sql parameters, which do not appear in the query; the cause is that the parameters are added all over the place. So, for a very simple criteria whose query will not be parameterized, these approaches might work. However, for the more generic cases, these approaches did not work!

Monday, 4 June 2012

Transaction not connected, or was disconnected error

I recently came across this rather bizarre error. All I needed to do for my work was to import some files into my system. The import process used bulk insert along with some other processing, all inside a transaction.
When the bulk insert is successful, it works great. However, when the bulk insert fails, it results in a "Transaction not connected, or was disconnected" error when the transaction is rolled back.

The stacktrace for the error is:-
at NHibernate.Transaction.AdoTransaction.CheckNotZombied()
at NHibernate.Transaction.AdoTransaction.Rollback()

This in itself was not too helpful. However, when I had a look at the NHibernate source code, I had a better idea of what was going on beneath the hood. The definition for the CheckNotZombied() method reveals the clue.

private void CheckNotZombied()
{
    if (trans != null && trans.Connection == null)
    {
        throw new TransactionException("Transaction not connected, or was disconnected");
    }
}


Proceeding further, the CheckNotZombied() method is actually called from a couple of places, Commit() and Rollback().
The code for either of these two looks like this :-
                                
// do stuff
CheckNotDisposed();
CheckBegun();
CheckNotZombied();
// do stuff


So, it looked like when the bulk insert query fails, it closes the transaction's connection, which resulted not only in this error, but also in the start of my very own blog.
A little googling revealed that bulk insert is actually not transactional, and that sql server by itself will roll back the batches, as mentioned here.
The other important point to notice is that when an exception occurs in the bulk insert part, sql server closes the connection. Interestingly, if you decide to catch the exception and, for some reason, decide to go ahead with the next bulk insert, sql server will use implicit transactions for that.
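
For context, the failing pattern was roughly the one sketched below (sessionFactory, the table name and the file path are just placeholders); the Rollback() in the catch block is the call that throws once sql server has severed the connection.

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    try
    {
        // Bulk insert issued as raw SQL through the NHibernate session.
        session.CreateSQLQuery(
            "BULK INSERT TargetTable FROM 'C:\\import\\data.csv' WITH (FIELDTERMINATOR = ',')")
            .ExecuteUpdate();
        tx.Commit();
    }
    catch
    {
        // If the BULK INSERT failed, sql server may already have closed the
        // connection, and this Rollback() is what raises
        // "Transaction not connected, or was disconnected".
        tx.Rollback();
        throw;
    }
}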