Entity Framework Core – Auditing Changes

This is an update for Entity Framework Core of a prior post on auditing EF changes in EF 5/6, so for my thinking on this issue please read that post.

To start with, here's my table definition. Note that I don't have a UserID column in this variant as I did not need it for the project I was working on, but if you reference my older post you can see how it is used.

CREATE TABLE [dbo].[AuditLog](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[Action] [nvarchar](2) NOT NULL,
	[TableName] [nvarchar](100) NOT NULL,
	[RecordKey] [nvarchar](255) NULL,
	[ColumnName] [nvarchar](100) NOT NULL,
	[OldValue] [nvarchar](max) NULL,
	[NewValue] [nvarchar](max) NULL,
	[CreateDateTime] [datetime2](7) NOT NULL,
	[AuditGroupingKey] [uniqueidentifier] NOT NULL,
 CONSTRAINT [PK_AuditLog] PRIMARY KEY CLUSTERED ([ID] ASC)
 )

This gives the entity definition below, along with code in the data context. In this project I was using EF migrations, so the database definition was not strictly required, but it's shown for clarity.

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public partial class AuditLog
{
	[Key]
	[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
	public int Id { get; set; }
	public string Action { get; set; }
	public string TableName { get; set; }
	public string RecordKey { get; set; }
	public string ColumnName { get; set; }
	public string OldValue { get; set; }
	public string NewValue { get; set; }
	public DateTime CreateDateTime { get; set; }
	public Guid AuditGroupingKey { get; set; }
}
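
The context declarations themselves aren't shown in the post, so for reference the later snippets assume something roughly like this. The context name and the two-character action code values are my assumptions; only the AuditLogs set and the ActionInsert/ActionDelete/ActionUpdate constant names come from the code that follows.

using Microsoft.EntityFrameworkCore;

public partial class MyDataContext : DbContext
{
	// Audit table the SaveChanges overrides below write to
	public DbSet<AuditLog> AuditLogs { get; set; }

	// Two-character action codes to fit the nvarchar(2) Action column
	// (the exact values are an assumption; pick whatever suits your reporting)
	private const string ActionInsert = "I";
	private const string ActionDelete = "D";
	private const string ActionUpdate = "U";
}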

Now, for EF Core, I started by overriding the SaveChanges method, similar to the EF 5/6 code, and it did not work. A couple of minutes later I realized I was calling SaveChangesAsync rather than SaveChanges, so I needed to add an override for SaveChangesAsync as well. Note that I only override one of the async overloads, as that is the one I used. Below is just the async method; the non-async version is the same except for the base calls (a sketch of it follows for completeness).

public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
{
	// Generate new group key
	var auditGroupingKey = Guid.NewGuid();

	// Get all added entries (save for later)
	var addedEntries = ChangeTracker.Entries().Where(p => p.State == EntityState.Added).ToList();

	// Get all Deleted/Modified entities (not Unmodified or Detached)
	foreach (var entry in ChangeTracker.Entries()
		.Where(p => p.State == EntityState.Deleted || p.State == EntityState.Modified))
	{
		// For each changed record, get the audit record entries and add them
		AuditLogs.AddRange(GetAuditRecordsForChange(entry, auditGroupingKey));
	}

	// Call the original SaveChangesAsync(), which will save both the changes made and the audit records
	var result = await base.SaveChangesAsync(cancellationToken);

	if (addedEntries.Count > 0)
	{
		// Process each added entry now that the keys have been generated
		foreach (var entry in addedEntries)
		{
			// For each added record, get the audit record entries and add them
			AuditLogs.AddRange(GetAuditRecordsForChange(entry, auditGroupingKey, true));
		}

		// Call the original SaveChangesAsync() again to save the audit records
		await base.SaveChangesAsync(cancellationToken);
	}

	return result;
}
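
For completeness, here is a sketch of the synchronous override mentioned above; it mirrors the async version with plain base.SaveChanges() calls.

public override int SaveChanges()
{
	// Generate new group key
	var auditGroupingKey = Guid.NewGuid();

	// Get all added entries (save for later)
	var addedEntries = ChangeTracker.Entries().Where(p => p.State == EntityState.Added).ToList();

	// Get all Deleted/Modified entities (not Unmodified or Detached)
	foreach (var entry in ChangeTracker.Entries()
		.Where(p => p.State == EntityState.Deleted || p.State == EntityState.Modified))
	{
		AuditLogs.AddRange(GetAuditRecordsForChange(entry, auditGroupingKey));
	}

	// Save the changes plus the audit rows for deletes/updates
	var result = base.SaveChanges();

	if (addedEntries.Count > 0)
	{
		// Now that the keys have been generated, audit the added entries
		foreach (var entry in addedEntries)
		{
			AuditLogs.AddRange(GetAuditRecordsForChange(entry, auditGroupingKey, true));
		}

		// Save the audit rows for the inserts
		base.SaveChanges();
	}

	return result;
}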

And finally the code that processes the changes and creates the audit rows. Note that EF Core provides much cleaner methods to get the table name, key name and values, so let's start by getting those.

private IEnumerable<AuditLog> GetAuditRecordsForChange(EntityEntry dbEntry, Guid auditGroupingKey, bool isNewEntry = false)
{
	// Current DateTime
	var currentDateTime = DateTime.Now;

	// Get table name
	var tableName = Model.FindEntityType(dbEntry.Entity.GetType()).GetTableName();

	// Create results
	var results = new List<AuditLog>();

	// We do not want to audit the audit log
	if (tableName.Equals("AuditLog", StringComparison.OrdinalIgnoreCase))
	{
		// Return empty results
		return results;
	}

	// Get key name from meta data
	var keyProperty = dbEntry.Properties.FirstOrDefault(p => p.Metadata.IsKey());
	var keyValue = keyProperty?.CurrentValue?.ToString();

	// Get foreign key values
	var foreignKeyNames = dbEntry.Properties.Where(p => p.Metadata.IsForeignKey()).Select(p => p.Metadata.Name).ToList();

Based on the action we are dealing with, we need to populate the audit rows with different data. For an insert we only care about the new values that are actually set (i.e. not null) and that belong to a column we want to audit. In my projects I don't need to record things like CreateDateTime, as that is already part of the audit row.

	if (dbEntry.State == EntityState.Added || isNewEntry)
	{
		results.AddRange(
			dbEntry.Properties
				.Where(p => p.CurrentValue != null && ValidColumnName(p.Metadata.Name))
				.Select(mp => new AuditLog
				{
					Action = ActionInsert,
					TableName = tableName,
					RecordKey = keyValue,
					ColumnName = mp.Metadata.Name,
					NewValue = mp.CurrentValue?.ToString(),
					CreateDateTime = currentDateTime,
					AuditGroupingKey = auditGroupingKey
				}));
	}

For deleted rows we care only about the original values, so there are no checks for modified values. I could have recorded just the row key and nothing else, since the row is being deleted and we probably don't care what was in it at the time.

	else if (dbEntry.State == EntityState.Deleted)
	{
		// For deletes, add the whole record
		results.AddRange(
			dbEntry.Properties
				.Where(p => ValidColumnName(p.Metadata.Name))
				.Select(mp => new AuditLog
				{
					Action = ActionDelete,
					TableName = tableName,
					RecordKey = keyValue,
					ColumnName = mp.Metadata.Name,
					NewValue = mp.OriginalValue?.ToString(),
					CreateDateTime = currentDateTime,
					AuditGroupingKey = auditGroupingKey
				}));
	}

For modified rows, we care about the modified values and the foreign key values, which let us reference other entities changed in the same save. This is where the AuditGroupingKey comes in useful. If I modify entity A, which has a child entity B that I also modify (say an order and its order lines), then with the grouping key and the foreign key I can match them all up (a rough query sketch follows the update block below).

	else if (dbEntry.State == EntityState.Modified)
	{
		// For updates ....
		results.AddRange(
			dbEntry.Properties
				.Where(p => (p.IsModified || foreignKeyNames.Contains(p.Metadata.Name)) && ValidColumnName(p.Metadata.Name))
				.Select(mp => new AuditLog
				{
					Action = ActionUpdate,
					TableName = tableName,
					RecordKey = keyValue,
					ColumnName = mp.Metadata.Name,
					OldValue = mp.OriginalValue?.ToString(),
					NewValue = mp.CurrentValue?.ToString(),
					CreateDateTime = currentDateTime,
					AuditGroupingKey = auditGroupingKey
				}));
	}
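
To make that concrete, here is a rough query sketch showing how the grouping key plus the audited foreign key columns let you stitch a parent and its children back together. The Order/OrderLine/OrderID names, and the context and orderId variables, are purely illustrative and not part of the post's project.

// context is the audit-enabled DbContext, orderId the parent's key (both assumed)
var orderKey = orderId.ToString();

// Find an audit row for the parent record, then pull back every row written
// in the same SaveChanges call via the shared grouping key
var orderAudit = context.AuditLogs
	.First(a => a.TableName == "Order" && a.RecordKey == orderKey);

var sameSaveRows = context.AuditLogs
	.Where(a => a.AuditGroupingKey == orderAudit.AuditGroupingKey)
	.ToList();

// The audited foreign key column ties the child rows back to this parent
var childRows = sameSaveRows
	.Where(a => a.TableName == "OrderLine"
	         && a.ColumnName == "OrderID"
	         && a.NewValue == orderAudit.RecordKey);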

And finally here is the column name check code. It's very simple and could certainly be made more flexible, perhaps with attributes on properties to ignore auditing on a per-table, per-property basis (a rough sketch of that idea follows the method).

	private bool ValidColumnName(string columnName)
	{
		return !columnName.Equals("CreateDateTime", StringComparison.OrdinalIgnoreCase) &&
		       !columnName.Equals("LastModifiedDateTime", StringComparison.OrdinalIgnoreCase) &&
		       !columnName.Equals("UpdateDateTime", StringComparison.OrdinalIgnoreCase) ;
	}
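
One possible shape for that attribute-driven approach, purely as a sketch layered on top of the code above; the DoNotAuditAttribute type and the PropertyEntry overload are my own invention, not something the post's project contains.

// A marker attribute you could put on properties that should never be audited
[AttributeUsage(AttributeTargets.Property)]
public sealed class DoNotAuditAttribute : Attribute { }

// An overload that checks the CLR property for the marker before falling back
// to the simple name check above
private bool ValidColumnName(PropertyEntry property)
{
	var clrProperty = property.Metadata.PropertyInfo;
	if (clrProperty != null &&
	    clrProperty.GetCustomAttributes(typeof(DoNotAuditAttribute), true).Any())
	{
		return false;
	}

	return ValidColumnName(property.Metadata.Name);
}

With that in place, the Where clauses earlier would call ValidColumnName(p) rather than ValidColumnName(p.Metadata.Name).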

So this is the update to my old EF5/6 code for EF Core.

Posted in Entity Framework | Leave a comment

Using SQL Identity Insert with Entity Framework 6 – Making it work

When you have a primary key field (say ID) with IDENTITY switched on, it works beautifully with EF when doing inserts and so on. It does everything you would expect, except when you need to temporarily allow explicit values to be inserted into the IDENTITY column for, say, data seeding or, in my case, some complex data conversion where I wanted to keep the original ID values to reference back to the old system. You would think there would be an option hidden somewhere within EF to do this, but there isn't, or at least after a number of searches I could not locate one. I do know that there is SQL I can run that will temporarily allow me to insert IDENTITY values and then revert back, i.e.

SET IDENTITY_INSERT myTablename ON
...Insert data ....
SET IDENTITY_INSERT myTablename OFF

Well there is a way in EF via the Database class to execute arbitrary SQL.

myDatabaseContext.Database.ExecuteSqlCommand("SET IDENTITY_INSERT myTablename ON");

Unfortunately this seems to execute, but then when running SaveChanges the identity value is ignored and a new value is used, i.e. the database generates it. Looking at the SQL sent by EF to the database I can see that the ID field is never passed. This is odd: I've set IDENTITY insert ON, the ID field is not supplied at all, and yet the SQL runs without error. Something strange is going on here.

First we need to solve the ID field not being passed in the SQL generated by EF. Looking at the entity class and the database context class I find nothing specifying that the ID field is a key field or that it has IDENTITY turned on. It turns out that a single integer key field is treated as an IDENTITY column by EF convention, so no attributes or code are needed to specify it. So how do I change this? Well, if my key field were not an IDENTITY field there would be an attribute stating so, i.e.

[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int ID { get; set; }

But how do we do this without altering the entity class? After all, we only need this temporarily. The solution is to create a new database context class and override OnModelCreating. In the override we can configure the property on the entity, i.e.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
  modelBuilder.Entity<myTableName>()
    .Property(e => e.ID)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
  base.OnModelCreating(modelBuilder);
}

So that takes care of the mapping. Now I run my code again, turning IDENTITY INSERT on, saving the data and then reverting.

myDatabaseContext.Database.ExecuteSqlCommand("SET IDENTITY_INSERT myTablename ON");
myDatabaseContext.SaveChanges();
myDatabaseContext.Database.ExecuteSqlCommand("SET IDENTITY_INSERT myTablename OFF");

And I get a SQL exception saying we have tried to insert an ID value for an IDENTITY field. What is going on here? The SQL looks good and is running against the same context, but it does not work. Well, it turns out that when you run ExecuteSqlCommand, EF opens a separate session for the command, runs it and then closes the session. So the IDENTITY INSERT setting is lost, as it only lasts for the duration of that session. Now we have a real problem: how can I get SQL to run in the same session as the SaveChanges action? After some digging, experimenting and hijacking of other code I have a workable solution. EF has the ability to intercept commands before and after execution, for logging purposes, by defining a class based on the IDbCommandInterceptor interface and adding it during the initialization phase of the database context. I've already been using this in my code to log the details of the SQL generated, to make sure it is not doing anything horrific, and also to get better information out of EF when a failure occurs, i.e. my LogIfError method looks like this

private void LogIfError<TResult>(DbCommand command,
  DbCommandInterceptionContext<TResult> interceptionContext)
{
  if (interceptionContext.Exception != null)
  {
    if (interceptionContext.Exception is SqlException)
    {
      var sqlException = (SqlException) interceptionContext.Exception;
      log.Info("SQLException '{0}/{1}/{2}'", sqlException.ErrorCode, sqlException.Number, sqlException.Message);
    }
    log.Error("Command {0} failed with exception {1}", command.CommandText, interceptionContext.Exception);
  }
}

This works nicely, but there is also another method called NonQueryExecuting, which is called prior to executing the SQL command and is passed that command. So I now have the raw SQL; what can I do to inject the IDENTITY INSERT SQL before the command runs? Surprisingly easily, is the answer: just prepend the IDENTITY SQL to the command text string. So what does this look like in practice? Starting with the interceptor class we have

public class EFLogCommandInterceptor : IDbCommandInterceptor
{
  public void NonQueryExecuting(DbCommand command,
    DbCommandInterceptionContext<int> interceptionContext)
  {
    // If injection text has been set, prepend it to the command about to run
    if (!string.IsNullOrWhiteSpace(SQLInjectText))
    {
      command.CommandText = SQLInjectText + " " +
        command.CommandText;
    }
    LogIfNonAsync(command, interceptionContext);
  }

  public string SQLInjectText { get; set; }

  // ... the remaining IDbCommandInterceptor members (ReaderExecuting/Executed,
  // ScalarExecuting/Executed, NonQueryExecuted) are omitted here for brevity
}

Next we need to add it into the database context; shown below is a simplified version of my database context. Note the addition of the interceptor class during initialization, and then in my SaveChanges overload I pass the name of the IDENTITY table (SQL Server only allows one at a time) through to the interceptor class.

public class IdentityContext : myDatabaseContext
{
  public IdentityContext()
  {
    // Register the interceptor once; DbInterception is global, so adding a new
    // instance per context would stack up duplicate interceptors
    if (commandInterceptor == null)
    {
      commandInterceptor = new EFLogCommandInterceptor();
      DbInterception.Add(commandInterceptor);
    }
  }

  protected override void OnModelCreating(DbModelBuilder modelBuilder)
  {
    modelBuilder.Entity<myTableName>()
      .Property(e => e.ID)
      .HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
    base.OnModelCreating(modelBuilder);
  }

  public int SaveChanges(string identityTable = null)
  {
    if (identityTable != null)
    {
      commandInterceptor.SQLInjectText =
        string.Format("SET IDENTITY_INSERT {0} ON", identityTable);
    }

    try
    {
      // Call the inherited SaveChanges (calling this overload again here would recurse)
      return base.SaveChanges();
    }
    finally
    {
      // Stop injecting once the save has run
      commandInterceptor.SQLInjectText = null;
    }
  }

  private static EFLogCommandInterceptor commandInterceptor;
}
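
Putting it together, usage might look roughly like this; the myTableNames DbSet and the Name column are my own illustrative assumptions on top of the post's myTableName placeholder.

// Rough usage sketch: insert a row with an explicit ID value via the
// identity-insert-aware context
using (var context = new IdentityContext())
{
  context.myTableNames.Add(new myTableName
  {
    ID = 12345,                  // the original ID we want to preserve
    Name = "Converted record"    // other columns as required (illustrative)
  });

  // Pass the table name so the interceptor prefixes SET IDENTITY_INSERT ... ON
  context.SaveChanges("myTablename");
}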

So now we run this code and it works: I get IDENTITY inserts when I need them, without causing any problems for my original database context or entity classes. I wish there were an easier way, but I have yet to find one.

Posted in Entity Framework | 2 Comments

Entity Framework 5/6 – Auditing Changes

It’s taken a while to put together this post, but a couple of years back I was looking for a way to audit changes made to the SQL database when using Entity Framework. After a little searching I found the post Using Entity Framework 4.1 DbContext Change Tracking for Audit Logging by Justin Dority, which gave me an excellent starting point. Now this worked to some degree for me, but at the time I was using EF Code First with the Fluent API. Note that in current Code First EF (6.x) we no longer use as much of the Fluent API to define keys, but instead use data annotations, i.e. a key field attribute which may or may not be present. Fun, fun, fun.

So some things worked and some did not (like the key field retrieval), and when adding entities the audit row was created before the key was generated (for IDENTITY fields), so it was not very helpful for finding the row later on.

Time for me to tweak the code to a) make key retrieval as bulletproof as I could, b) deal with IDENTITY key values and c) make other minor tweaks.

So starting with my table definition we have

CREATE TABLE [dbo].[AuditLog](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[Action] [nvarchar](2) NOT NULL,
	[TableName] [nvarchar](100) NOT NULL,
	[RecordKey] [nvarchar](max) NULL,
	[ColumnName] [nvarchar](100) NOT NULL,
	[OldValue] [nvarchar](max) NULL,
	[NewValue] [nvarchar](max) NULL,
	[CreateDateTime] [datetime2](7) NOT NULL,
	[CreateUserID] [int] NOT NULL,
	[AuditGroupingKey] [uniqueidentifier] NOT NULL,
 CONSTRAINT [PK_AuditLog] PRIMARY KEY CLUSTERED ([ID] ASC)
 )

Which gives an EF Code-First definition of

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("AuditLog")]
public partial class AuditLog
{
  public int ID { get; set; }

  [Required, StringLength(2)]
  public string Action { get; set; }

  [Required, StringLength(100)]
  public string TableName { get; set; }

  public string RecordKey { get; set; }

  [Required, StringLength(100)]
  public string ColumnName { get; set; }

  public string OldValue { get; set; }

  public string NewValue { get; set; }

  [Column(TypeName = "datetime2")]
  public DateTime CreateDateTime { get; set; }

  public int CreateUserID { get; set; }

  public Guid AuditGroupingKey { get; set; }
}
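
As with the EF Core version earlier, the context members the following code relies on are not shown in the post. Roughly, something like this is assumed; the context name is a placeholder, and the ObjectContext helper is the same one that appears in the "Discarding changes" post further down.

using System.Data.Entity;
using System.Data.Entity.Core.Objects;
using System.Data.Entity.Infrastructure;

public partial class MyDataContext : DbContext
{
  // Audit table the SaveChanges overrides below write to
  public DbSet<AuditLog> AuditLogs { get; set; }

  // Helper used when digging key/foreign-key information out of the model metadata
  public ObjectContext ObjectContext
  {
    get { return ((IObjectContextAdapter) this).ObjectContext; }
  }
}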

So if you look at Justin’s code he does all the audit row creation up front, prior to the actual save. But as I have IDENTITY fields, whose values we will only know later, I need to split the audit row creation into two steps: the first for updates and deletes, and the second for additions. Why split rather than just move everything after the SaveChanges call? To avoid multiple calls to SaveChanges where possible, i.e. if all we have are updates and deletes then only one call is needed. So here is my code; note that I always have the user available (it’s a WPF app). Also note that I check for a particular entity type, FileData. This is a FileStream entity class, so I don’t want to attempt to store file changes in the audit log.

public override int SaveChanges()
{
  // Find current user and save
  return SaveChanges(App.CurrentUser.ID);
}

And here is the core SaveChanges code

public int SaveChanges(int userID)
{
  using (var scope = new TransactionScope())
  {
    // Generate new group key
    var auditGroupingKey = Guid.NewGuid();

    // Get all added entries (save for later)
    var addedEntries = ChangeTracker.Entries().Where(p => p.State == EntityState.Added && !(p.Entity is FileData)).ToList();

    // Get all Deleted/Modified entities (not Unmodified or Detached)
    foreach (var entry in ChangeTracker.Entries().Where(p => (p.State == EntityState.Deleted || p.State == EntityState.Modified) && !(p.Entity is FileData)))
    {
      // For each changed record, get the audit record entries and add them
      AuditLogs.AddRange(GetAuditRecordsForChange(entry, userID, auditGroupingKey));
    }

    // Call the original SaveChanges(), which will save both the changes made and the audit records
    var result = base.SaveChanges();

    if (addedEntries.Count > 0)
    {
      // Process each added entry now that the keys have been created
      foreach (var entry in addedEntries)
      {
        // For each added record, get the audit record entries and add them
        AuditLogs.AddRange(GetAuditRecordsForChange(entry, userID, auditGroupingKey, true));
      }

      // Call the original SaveChanges() to save the audit records
      base.SaveChanges();
    }

    scope.Complete();
    return result;
  }
}

And finally, the code that processes the changes and creates the audit rows. We start by getting the table name, key field and foreign keys.

private IEnumerable<AuditLog> GetAuditRecordsForChange(DbEntityEntry dbEntry, int userID, Guid auditGroupingKey, bool isNewEntry = false)
{
    var results = new List<AuditLog>();
    var currentDateTime = DateTime.Now;

    // Get the Table() attribute, if one exists
    var tableAttribute = dbEntry.Entity.GetType().GetCustomAttributes(typeof(TableAttribute), true).SingleOrDefault() as TableAttribute;

    // Get table name (if it has a Table attribute, use that, otherwise get the name, trimmed from the proxy name if needed)
    // Is this a proxy class, if so get the base type name otherwise use the current type name
    var type = dbEntry.Entity.GetType();
    var tableName = tableAttribute != null ? tableAttribute.Name : (type.Namespace.Equals("System.Data.Entity.DynamicProxies") ? type.BaseType.Name : type.Name);

    // We do not want to audit the audit log
    if (tableName.Equals("AuditLog", StringComparison.OrdinalIgnoreCase))
    {
        return results;
    }

    // Get key name from meta data
    var keyName = ObjectContext
        .MetadataWorkspace
        .GetEntityContainer(ObjectContext.DefaultContainerName, DataSpace.CSpace)
        .BaseEntitySets
        .First(m => m.ElementType.Name == tableName)
        .ElementType
        .KeyMembers
        .Select(k => k.Name)
        .FirstOrDefault();

    // Get foreign key values (for later parsing when showing history)
    var foreignKeyProperties = GetForeignKeyValues(tableName);

Then, based on the action we are dealing with:

    // What was the action?
    if (dbEntry.State == EntityState.Added || isNewEntry)
    {
        // For inserts, add the record (except for null fields)
        results.AddRange(
            dbEntry.CurrentValues.PropertyNames
            .Where(cn  => dbEntry.CurrentValues.GetValue(cn) != null)
            .Select(cn => new AuditLog
            {
                Action = "A",
                TableName = tableName,
                RecordKey = dbEntry.CurrentValues.GetValue(keyName).ToString(),
                ColumnName = cn,
                NewValue = dbEntry.CurrentValues.GetValue(cn) == null ? null : dbEntry.CurrentValues.GetValue(cn).ToString(),
                CreateDateTime = currentDateTime,
                CreateUserID = userID,
                AuditGroupingKey = auditGroupingKey
            }));
    }
    else if (dbEntry.State == EntityState.Deleted)
    {
        // For deletes, add the whole record
        results.AddRange(
            dbEntry.OriginalValues.PropertyNames
            .Select(cn => new AuditLog
            {
                Action = "D",
                TableName = tableName,
                RecordKey = dbEntry.OriginalValues.GetValue(keyName).ToString(),
                ColumnName = cn,
                NewValue = dbEntry.OriginalValues.GetValue(cn) == null ? null : dbEntry.OriginalValues.GetValue(cn).ToString(),
                CreateDateTime = currentDateTime,
                CreateUserID = userID,
                AuditGroupingKey = auditGroupingKey
            }));
    }
    else if (dbEntry.State == EntityState.Modified)
    {
        // For updates
        results.AddRange(
            dbEntry.OriginalValues.PropertyNames
                .Where(cn => !Equals(dbEntry.OriginalValues.GetValue(cn), dbEntry.CurrentValues.GetValue(cn)) || foreignKeyProperties.Contains(cn))
                .Select(cn => new AuditLog
                {
                    Action = "U",
                    TableName = tableName,
                    RecordKey = dbEntry.OriginalValues.GetValue(keyName).ToString(),
                    ColumnName = cn,
                    OldValue = dbEntry.OriginalValues.GetValue(cn) == null ? null : dbEntry.OriginalValues.GetValue(cn).ToString(),
                    NewValue = dbEntry.CurrentValues.GetValue(cn) == null ? null : dbEntry.CurrentValues.GetValue(cn).ToString(),
                    CreateDateTime = currentDateTime,
                    CreateUserID = userID,
                    AuditGroupingKey = auditGroupingKey
                }));
    }

    // Otherwise, don't do anything, we don't care about Unchanged or Detached entities so return the audit rows
    return results;
}

And finally here is the code to get the foreign key fields. I still have to figure out a better way to derive foreign key field information, but the simplistic by-convention approach I show here works with my current database design. It should be possible to get this from the metadata, but the last time I checked it just was not there. I’m hoping that in EF 7 we can get this information more easily.

private IEnumerable<string> GetForeignKeyValues(string tableName)
{
    // Get the conceptual-model entity type for this table
    var item = ObjectContext.MetadataWorkspace.GetItems<EntityType>(DataSpace.CSpace).First(i => i.Name.Equals(tableName));

    // Foreign key column names derived by convention from the navigation property names
    var foreignKeys = item.DeclaredNavigationProperties.Select(np => np.Name).ToList();

    return foreignKeys;
}

So there is my variation on the great work done by Justin.

So while it’s taken a while to blog about this (or at least to finish the post), it has been in use for over 2 years without issues and I hope it helps.

Posted in Entity Framework | 1 Comment

Entity Framework – Discarding changes

While this may seem old to some, it’s going to be new to others, I hope. I’m in the midst of writing an internal application in WPF using Entity Framework (Code First) for the data, both of which are reasonably ‘new’ to me, in that I’ve played with them but have yet to really work with them at full scale. This may seem strange to some, but my company is slow to change and the introduction of new technology is a slow and painful process. One issue I encountered is being able to discard any changes made to an entity. Let me explain.

I have a grid displaying a list of data. An edit button takes the user to a popup edit form. As the entity is passed from the grid and everything is bound together, any changes in the edit form are reflected in the grid. So hitting Save at this point is really just to update the contents of the database. The problem occurs when the user decides to discard the changes. Hitting Cancel closes the form but does not revert the changes to the underlying data entity. So what we have now is a grid showing data that does not match the database. Not very good at all.

I started to research ways to revert the entity back to its original values and found that I could use the Reload method. So I came up with this piece of code to reattach my entity to my data context and reload the values.

// Find the 'entry' and reload it
var entityEntry = dataContext.Entry(dataEntity);
entityEntry.Reload();

So while this may or may not be the most efficient approach, it works. Or at least I thought it did, until I started moving to more complex edit forms with embedded grids for ‘child’ data.

For this child data I’m using the same pattern. Editing is via a popup edit form. The reload did not refresh any of the child data. So a bit of searching resulted in the following code

var refreshableObjects =
  dataContext.ObjectContext.ObjectStateManager.GetObjectStateEntries(
    EntityState.Added | EntityState.Deleted | EntityState.Modified |  EntityState.Unchanged)
  .Where(i => i.EntityKey != null).Select(i => i.Entity);

dataContext.ObjectContext.Refresh(RefreshMode.StoreWins, refreshableObjects);

Note that I have a property coded to get the ObjectContext from the data context

public ObjectContext ObjectContext
{
  get { return ((IObjectContextAdapter) this).ObjectContext; }
}

So I tried this and it almost worked: all new entities were removed, all updated entities were reverted, but all deleted entities were NOT restored. Yep, I still had an issue. My delete code was simple; I use a button in the grid to delete the row, nothing fancy.

// Get the data context of the grid row
var item = (DataType) ((Button) sender).DataContext;

// Remove from the child collection
dataContext.childItems.Remove(item);

Seems simple enough, and it works. Searching around, I couldn’t figure out why this code caused an issue in EF. After a bit of head scratching I realized that my refresh code was looking for entities in the data context in the Deleted state, whereas I was actually removing the entity, not setting its state. So I changed the delete code to

// Get the data context of the grid row
var item = (DataType) ((Button) sender).DataContext;

// 'Delete it' in the data contexts
dataContext.Entry(item).State = EntityState.Deleted;

// Rebind grid
childGridView.Rebind();

This was the correct change. I’m now marking the entity as deleted so that my refresh code can find it, and the grid rebind refreshes my grid, which correctly no longer shows the deleted item.

Is this the best solution to the issue? I can’t say that I know, as there seem to be many ways to try to refresh data in EF, but this way works and I’m happy with that.

Posted in Entity Framework | Leave a comment

Tropical Storm Andrea

I have to consider myself lucky. I work for a company that allows local staff to work from home up to 3 days a week. This was one of my days to work from home, and with a TD passing through it worked out well. Where I live it was windy and wet, no more so than a typical FL thunderstorm, but at my office it was pretty nasty. They were hit by one of the bad storm bands that passed through the area. Everyone was OK, but it was not pleasant to go through. So I got to avoid the journey to work, the weather at the office and wondering what was happening back at home. Instead I had live feeds on my office TV, radar on my personal laptop, and still managed to put a full day’s work in, not forgetting that our CEO sent out an email saying they were thinking about us. Sometimes, no matter what work can be like, you have to appreciate what you have.

Posted in Other | Leave a comment

Visual Studio 2013

So while it is great that Microsoft is releasing Visual Studio 2013 so quickly, it looks like we will be exposed to more bloatware. A small hint, MS: corporations tend not to want all their IP up in the cloud for anyone to access. Yes, I know it’s secure, and probably a lot more so than a lot of corporations, but once compromised these cloud services (OK, let’s drop the marketing speak, it’s just the internet like it’s always been) can serve up a lot more IP. So we have more cloud BS, and yet the bugs and awful UI continue. But I guess MS is more concerned with the 100 million hobbyist devs than the 12 million professional devs, and note VS2012 has had 4 million downloads; I wonder what the breakdown is? So let’s slam in new features and screw around with the UI to promote the latest corporate touch look and feel. Touchscreen? Yeah, that’s how I’m going to use VS; I’m just going to put in that req to replace my 24-inch LCD with a shiny new touchscreen in these tough economic times, not. Rant over. I’ll stick with VS2010 even though I have an MSDN Ultimate license and can change whenever.


Posted in Computers and Internet | Leave a comment

First two Code Camps of the Year

I have to say that the first two code camps of the year that I have attended (South Florida and Orlando) were great. Lots of interesting talks, and good people to talk to and network with. A big thanks from me to all those who put a lot of time and effort into organizing these events. I do have to give a definite win to Orlando for the code camp T-shirt; it has to rate as the best themed coding T-shirt I have seen.

Posted in Code Camps | Leave a comment

A Big Apology to Code Camp Attendees

So a while back I thought I’d been hacked and decided, as a matter of course, to get rid of all the accounts I had recently accessed and replace them with new ones. I thought I had sorted out all of the related accounts and information, but I missed my blog. That tells you two things: I need to organize my information a little better, and I really should blog a bit more. This was brought to light by my recent attendance at Orlando Code Camp, where an attendee at one of my presentations later told me that my blog links were broken. I wish I had that person’s name so I could thank them, otherwise this would have gone unnoticed. As I have also presented at South Florida Code Camp, I’ve been guilty of having a broken setup there too.

So attached to this blog post are the links to my presentations and code resources for all my talks. If you have issues getting to this content, please try using IE; I’ve noticed that Firefox has issues.

OpenXML
Presentation: OpenXml – Do It Yourself Office
Code: OpenXml – Do It Yourself Office Code

NuGet:
Presentation: NuGet – What is it Why should I use it

OData
Presentation: Building OData WCF Providers
Code: Building OData WCF Providers Code

Posted in Code Camps | Leave a comment

Developing for Windows 8 RT devices

Quick rant on this. Why on earth do I have to be using a dev machine with Win 8 to actually develop for a Win 8 RT device? Visual Studio 2012 has a simulator, so I don’t have to run the app natively; I should be able to develop for Win 8 RT on a Win 7 machine. Anyone at Microsoft listening? It almost feels like the Apple camp, in that if you want to develop for an iPhone you are tied to buying a whole bunch of the latest Apple kit to be able to play. Remember MS, you need us, the developers, to make this platform a success. Having to upgrade my dev machine to Win 8 to be able to work with RT apps (OK, I dual-booted it) is a pain. Win 8 is not a desktop O/S; watching the MS staff at Build proved this to me, with a lot of back and forth with the ‘Start’ screen via keyboard shortcuts, gestures and mouse moves. This is not productive. Oh, and don’t get me started on a dev license that only lasts for 4 months and then has to be renewed.

Posted in Computers and Internet | Leave a comment

Living with a Windows Surface tablet

Attending Build 2012 was an experience not to be forgotten. I, like probably all the other attendees, was wondering what surprise MS would bring out during the keynote. To our surprise there were no new surprises, just a reinforcement of already announced or delivered technologies. That’s not to say that getting a Windows Surface and a Nokia 920 Win 8 phone wasn’t great, but they were already announced or already delivered to the public. Which brings me to the point of my post: living with a Surface tablet. Note that I’m not going through this feature by feature; these are more random musings on it.

So far, so good, more or less. The hardware is well designed and, at least for me, fairly durable. The touch keyboard, while strange at first, is gradually growing on me. For a non-touch typist I think it works well with my hunt-and-peck style. The finger-on-screen interface is a bit more challenging, due to my fingers not liking any kind of touch interface.

Apps are constantly appearing in the store, but I wish the store had a bit more info per item. An icon, one line of text and a price is not sufficient, in my mind, to attract attention to any app, and once the store fills with apps, good luck finding anything. Now, there may be a mode that shows more information per tile, but if there is it escapes me, and with the lack of manuals in the packaging, who knows what lies in the depths.

Therein lies a problem. Perhaps I’ve been a developer too long, as it is taking me quite a long time to figure out how to use this device and all its touch gestures and features. Maybe, and probably, a 5-year-old would find it much more intuitive than I do. I spent an hour at Build with one of the MS staff members getting him to show me what to do. I have to apologize, as I do not remember his name, but he was helpful to this old developer in explaining what to do.

WiFi only: while there are plenty of free access points around, including airports and the like, it would have been nice, and probably would have helped with user buy-in, if the Surface had a cellular interface. I may be the only one, but my office is not friendly to equipment that is not provided by them. In that case Connectify is your friend, but don’t tell them I told you.

So I’m writing this blog post from my Surface using the keyboard and ‘mouse pad’ and it is all more or less familiar. The touch features I’m getting used to, and as a tablet OS it seems to work. As a desktop/laptop OS it’s not optimal; OK, I think it’s garbage. I would have preferred to see the clean OS of Windows Phone 7 expanded to the tablet and the desktop/laptop Win 7 O/S continued. I’m a professional dev and I have to work with what my company wants me to use, and that is Win 7, as we have just moved to it. All the touch fun is way beyond imagining in any of our software, and I frankly see no way to integrate it.

So my conclusion, in a roundabout way, is that I like my Surface as a portable, occasional-use device, but as a machine to do work with I doubt its capabilities, and I think that is more a failing of the O/S than of the hardware.

Posted in Computers and Internet | Leave a comment